Posts tagged about-us

Our Customers

We have a proven track record of delivering fast, relevant search results. Nearly 175 federal and District of Columbia government agencies currently use our service to power their search boxes and improve their visitors’ search experience.

Homeland Security

Our History

Our service got its start when Internet entrepreneur Eric Brewer, whose early research was funded by the Department of Defense, offered to donate a powerful search engine to the government. That gift helped accelerate the government’s earlier work to create a government-wide portal.

In June 2000, President Clinton announced the gift from the Federal Search Foundation, a nonprofit organization established by Brewer, and instructed that an official U.S. web portal be launched within 90 days. The portal went online on September 22, 2000, with a prominent search box that allowed the public to search across government websites. Visit the portal to learn more about its mission and history.

In 2010, the service moved to open-source software and became available to agencies across the U.S. government to power the search boxes on their sites. Today we power nearly 300 million queries a year through over 1,800 federal search configurations.

Page last reviewed or updated:

Terms of Service

The following terms of service (“Terms”) govern the General Services Administration’s (GSA) website and services, including the content, documentation, code, and related materials, and are offered subject to your acceptance of the Terms as well as any relevant sections of the DigitalGov Site Policies (collectively, the “Agreement”). Access to or use of the services or their content constitutes acceptance of this Agreement.

Data Collection and Use

Data Collection

  • When you use the service, we store information about searches on your site, including: the web page from which searchers accessed your search results, the date and time, the words searched for, items clicked on a page, and standard HTTP request data, including the browser and operating system used. We periodically delete our search logs.
  • We use this information to measure the usage of our website and services and to identify system performance or problem areas. We also use this information to help us develop the service, analyze patterns of usage, and to make the service more useful. This information is not used for associating search terms or patterns of site navigation with individual users. We may anonymize and provide this information to third-party entities for the purposes of analyzing search traffic.

Secondary Use

  • Customers using our Search Results API cannot cache results.
  • Data accessed through our website and services does not, and should not, include controls over its end use.
  • Once the data has been downloaded from the service, we can’t vouch for the quality or timeliness of any analyses conducted with the data retrieved.

Citing Data

Customers using the Search Results API must display the Powered by Bing [PDF] logo on ‘web’ and ‘image’ search results pages for attribution of these results. Customers using any of the other three indexes (‘docs’, ‘news’, and ‘videonews’) must display “Powered by” (using plain text or our logo) for attribution instead of the Bing logo.

Source Code

Use of Open Source Software

Our service uses open-source software and free or low-cost commercial application programming interfaces when they best meet the needs and mission of GSA.

Redistribution of Code

  1. Software source code written entirely by GSA staff, and by contractors who are developing software on behalf of GSA, is by default a public domain work.
  2. Software source code previously released under an open source license and then modified by GSA staff is considered a “joint work.” It is partially copyrighted, partially public domain, and as a whole is protected by the copyrights of the non-government authors and must be released according to the terms of the original open-source license.
  3. All source code as defined above may be shared with the general public via a highly visible, easily accessible online source code community (such as Github) that facilitates the code’s reuse. Source code won’t be released if any of the following conditions are met:
  • The author of the code determines that the code is too crude to merit distribution or provide value to the broader community.
  • The Government doesn’t have the rights to reproduce and release the item. The Government has public release rights when the software is developed by Government personnel, when the Government receives “unlimited rights” in software developed by a contractor at Government expense, or when pre-existing OSS is modified by or for the Government.
  • The public release of the item is restricted by other law or regulation, such as the Export Administration Regulations or the International Traffic in Arms Regulation.
  • GSA cybersecurity staff determine that the public release of such code would pose an unacceptable risk to GSA’s operational security.

Modification or False Representation of Content

You may not modify or falsely represent content accessed through the service and still claim that the service is its source.

Right to Limit

Users of the website and services must have a valid government email address from a federal or District of Columbia government agency. If GSA reasonably believes that you are not a federal or District of Columbia government employee, or a contractor acting within the scope of its contract with the federal or District of Columbia government, GSA may permanently block your use of the website and services.

Use of the APIs may be subject to certain limitations on access, calls, or use as set forth within this Agreement or otherwise provided by GSA. If GSA reasonably believes that you have attempted to exceed or circumvent these limits, your ability to use the API may be permanently or temporarily blocked.

GSA may monitor your use of its services to improve the service or to ensure compliance with this Agreement.

Service Termination

If you wish to terminate this Agreement, you may do so by refraining from further use of the website and services. GSA reserves the right (though not the obligation) to (1) refuse to provide the services to you if, in GSA’s opinion, your use violates any GSA policy, or (2) terminate or deny you access to and use of all or part of the services at any time for any other reason in its sole discretion. Any hosted applications may also be shut down or removed. All provisions of this Agreement which by their nature should survive termination shall survive termination including, without limitation, warranty disclaimers, indemnity, and limitations of liability.


Changes to This Agreement

GSA reserves the right, at its sole discretion, to modify or replace this Agreement, in whole or in part. Your continued use of or access to the services following posting of any changes to this Agreement constitutes acceptance of those modified terms. GSA may, in the future, offer new services and/or features. Such new features and/or services shall be subject to the terms and conditions of this Agreement.

Disclaimer of Warranties

The services are provided “as is” and on an “as-available” basis. GSA hereby disclaims all warranties of any kind, express or implied, including without limitation the warranties of merchantability, fitness for a particular purpose, and non-infringement. GSA makes no warranty that the services will be error free or that access thereto will be continuous or uninterrupted.

Limitations on Liability

In no event will GSA be liable with respect to any subject matter of this Agreement under any contract, negligence, strict liability or other legal or equitable theory for: (1) any special, incidental, or consequential damages; (2) the cost of procurement of substitute products or services; or (3) for interruption of use or loss or corruption of data.

General Representations

You hereby warrant that (1) your use of the website and services will be in strict accordance with the Agreement and all applicable laws and regulations, and (2) your use of the website and services will not infringe or misappropriate the intellectual property rights of any third party.


Entire Agreement

This Agreement constitutes the entire Agreement between GSA and you concerning the use of the website and services, and may only be modified by the posting of a revised version on this page by GSA.


Disputes

Any disputes arising out of this Agreement and access to or use of the services shall be governed by federal law.

No Waiver of Rights

GSA’s failure to exercise or enforce any right or provision of this Agreement shall not constitute waiver of such right or provision.


Search Engine Optimization for Government Websites

On June 10, 2014, the Metrics Community of Practice of the Federal Web Managers Council and DigitalGov University hosted an event to honor the memory of Joe Pagano, a former co-chair of the Web Metrics Sub-Council.

This third lecture honoring Joe focused on search engine optimization (SEO).

While commercial search engines do a remarkable job of helping the public find our government information, as web professionals, it’s also our job to help the public make sense of what they find.

Ammie Farraj Feijoo, our program manager, presented on SEO for government websites and specifically talked about:

  • What SEO is and why it is important;
  • SEO building blocks for writing content;
  • Conducting keyword research; and
  • Eliminating ROT (redundant, outdated, and trivial content).

Download the slide deck [PDF] and visit the resources below to learn more.

Webmaster Tools

A Few (of Many) SEO Resources


Search Is the New Big Data

Search is easy, right? You type a term in a search box and the exact page you’re looking for appears at the top of the list of results. But search is hard and has many shades of grey.

On April 10, 2014, Loren Siebert, our senior search architect, presented on:

  • Complexities of recall and precision,
  • Popular open source search technologies, and
  • “Search magic” like stemming, synonyms, fuzziness, and stopwords.

Download the slide deck and visit the resources below to learn more.
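To make ideas like stemming, stopwords, and fuzziness concrete, here are toy Ruby versions of them. These are illustrations only, not what any real engine ships: production systems such as Solr use far more robust analyzers (Porter or Snowball stemming, curated stopword and synonym lists).

```ruby
STOPWORDS = %w[the of and a to in].freeze

# Toy suffix-stripping "stemmer" -- real engines use Porter or
# Snowball stemming; this only illustrates the idea.
def stem(word)
  word.sub(/(ing|ers|er|s)$/, '')
end

# Analyze a query: drop stopwords, then stem the remaining terms.
def analyze(query)
  query.downcase.split.reject { |w| STOPWORDS.include?(w) }.map { |w| stem(w) }
end

# Levenshtein edit distance, the usual basis for "fuzzy" matching.
def edit_distance(a, b)
  d = Array.new(a.size + 1) { Array.new(b.size + 1, 0) }
  (0..a.size).each { |i| d[i][0] = i }
  (0..b.size).each { |j| d[0][j] = j }
  (1..a.size).each do |i|
    (1..b.size).each do |j|
      cost = a[i - 1] == b[j - 1] ? 0 : 1
      d[i][j] = [d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost].min
    end
  end
  d[a.size][b.size]
end

analyze("the searching of searchers") # both terms reduce to "search"
edit_distance("manual", "manuel")     # one edit away: a fuzzy-match candidate
```

Because “searching” and “searchers” stem to the same token, a document about searchers matches a query about searching; a small edit distance is what lets an engine suggest a correction for a near-miss query.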


Our Open Source Strategy

We keep an eye on what our government counterparts are up to, both in the U.S. and in other countries. We first came across Gov.UK’s philosophy on, and approach to, coding in the open a couple of years ago. It caught our attention, and we realized we should also articulate our open source strategy.

Use and Contribute to Open Source Projects

Since 2010, we’ve embraced and leveraged open source software to build our site search service for federal, state, and local government websites. This use of open source has allowed us to experience enormous growth over the past few years. In July 2014 alone, over 23 million searchers received results from our service—a five-fold increase since July 2010.

Our search service is now a complex system made up of many moving parts, including providing type-ahead search suggestions, serving search results, fetching, indexing, and caching content, and providing analytics.

Each of these parts is compiled into our codebase and, as we use open source components for our system, we contribute back to the projects.

Code in the Open

We recently began to unravel our monolithic codebase so that we can share individual pieces of our code. To borrow the phrase from Gov.UK, we’re now coding in the open.

We recently released the code for our social media image, jobs, and recalls API servers. They’re our first foray into coding in the open. The source code for these API servers is in our GitHub repo and is available for anyone to see and contribute to.

The data products for the jobs and recalls code are also open and available for anyone to consume on our Developer hub.

These three servers and their underlying data now operate outside of our core search codebase.

Following this same model, moving forward, we plan to:

  • Share first—For every new feature, we’ll write the code so that anyone can make use of the code, not just us. If the public community contributes to the codebase, we’ll be able to improve this feature without taxing our developers.
  • Expose APIs—We’ll expose our data products as APIs so that anyone can make use of the data, not just searchers on government websites.
  • Be our own customer—We’ll use our own public code and data just like everyone else. We’ll call our own API servers to integrate the data within our search results pages.

Make Things Open to Make Things Better

We agree with Gov.UK that “to make things open makes things better.”

We have finite resources and we don’t want to lose our focus on serving our agency customers and improving visitors’ search experience on government websites. So, we won’t be spending a lot of time to build or support a vibrant community around our code.

That said, we hope that exposing the pieces of our system will be useful to someone somewhere. We’ll continue to provide the “ingredients” of our search service so that others will be able to make use of the code and data in ways that we could never imagine.

And, We’re Not Alone

We’re not alone. Other federal agencies have embraced the approach of coding in the open and have GitHub repos. Below are just a few of our many favorites.


Our Redesign: Before and After

Bing (External link), DuckDuckGo (External link), Google (External link), and Yahoo (External link) have all rolled out major redesigns to their search results pages in the past year.

The last time we did a major redesign of our results page was in January 2012. It was long overdue for a facelift.

So, we’ve redesigned our search results page. We’ve kept an eye on best practices in the search industry and on what media websites are up to. But we’re not simply following the leaders. We’ve also analyzed our search data to make data-driven decisions that, ultimately, aim to improve searchers’ experience on your site.

Below are some of the highlights.

Basic Search Results

We started with the basic search results because they’re the meat of the page. They also have the highest clickthru rate among the various items shown on the page. Here’s the before and after for a search on passport:

Search result for passport

We’ve done away with the underlining on the title. The snippet is wider and has more space. Clicking or touching anywhere on the snippet (not just the title) opens the link.

Spelling Suggestions

Spelling suggestions have the second highest clickthru rate, so we focused on those next. Here’s the before and after for a search on manual noriega (sic):

Spelling suggestion for manual noriega (sic)

We’ve done away with the wordiness and boldface. The suggestion and its corresponding option to override the automatic correction now use plain language and are shown on two lines. We now only use boldface to highlight keyword matches for searchers’ queries.
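That keyword-highlighting behavior boils down to wrapping matched query terms in the snippet. A minimal illustrative sketch (not our production code, which works on the search index’s own highlighting output):

```ruby
# Wrap each query term found in a snippet in <strong> tags, the way
# results pages boldface keyword matches. Illustrative only.
def highlight(snippet, query)
  query.downcase.split.uniq.reduce(snippet) do |text, term|
    # \1 keeps the snippet's original capitalization of the match.
    text.gsub(/\b(#{Regexp.escape(term)})\b/i, '<strong>\1</strong>')
  end
end

highlight('Apply for a passport online.', 'passport')
# => "Apply for a <strong>passport</strong> online."
```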

Search Verticals

Through usability tests, we found that searchers didn’t see the options to change the scope of their results in the left-hand column. Data also supports this finding, as very, very few searchers clicked on these options. Here’s the before and after for a search on jobs:

Search verticals

The new format clusters all options to narrow, broaden, or change the scope of the results together, and places these options directly above the search box. Searchers see up to three options and have the ability to show More …. The old format separated them, showing the option to search on related sites in the right-hand column above the box and verticals in the left-hand column below the box.

Once visitors have opted to type a term in the search box, they’re in search mode. Some have already tried and abandoned browsing the website.

To reflect visitors’ shift from browsing to searching, search results pages on many commercial media websites highlight the search verticals and either remove or downplay the site’s navigation. Here’s the before and after for a search on veterans affairs:

Site brand and navigation

The new format prominently shows the search verticals above the search box. Searchers can click the logo to return to the homepage, or expand the “hamburger” menu to view additional options to browse the site. Searchers can also browse the site using links in the footer.


Videos

Eye tracking studies have shown searchers gravitate to video thumbnails. Snippets are also very important for increasing clickthru rates. Searchers are more likely to view a video if they understand how it relates to what they’re looking for. Here’s the before and after for a search on careers:

Video search result for careers

The new video format shows one video plus a clearly labeled “More videos about …” link. It also shows the video’s source (in this case YouTube), the length of the video, and a short snippet. The video is now separated from basic search results by thin grey lines.

Searchers now see inline videos from the past 13 months only. (It used to be forever.) There’s a spike in video views at around one year so this time period allows searchers to see recent inline videos for annual events like tax season, national health observances, and holidays.


News

Here’s the before and after for a search on tornadoes:

News search results for tornadoes

The new news format now matches the look of the basic search results to provide a more seamless experience.

Searchers now see inline news from the last four months only. (It used to be forever, which showed stale content from several years ago in some cases.) Very recent news (less than five days) shows above the basic search results. Less recent but still timely news (greater than five days and less than four months) shows below the basic search results. The old format always showed news results after the third web result.


Tweets

Here’s the before and after for a search on social security:

Tweet for social security

The new tweet format shows one tweet inline with the results. It now shows the agency’s Twitter profile image and the tweet is separated from basic search results by thin grey lines and doesn’t have a separate heading. As tweets tend to be very time-sensitive, searchers now see tweets from the last three days only. (It used to be 30 days.) The old format showed tweets in a right-hand column that was often mistaken for an ad.

Job Openings

Here’s the before and after for a search on communications jobs:

Job results for communications jobs

The new job opening format shows up to three job openings inline with the results. Searchers can now click or touch the down arrow to see up to 10 listings inline. For federal agencies, it now shows the USAJobs logo to clearly identify that as the data source. The job openings are separated from basic search results by thin grey lines. The old format showed jobs in a right-hand column that was often mistaken for an ad and didn’t give searchers the option to see more listings on the results page.

Health Topics

Here’s the before and after for a search on diabetes:

Health topic result for diabetes

The new health topics format shows a shorter, more concise snippet, and the snippet doesn’t have any embedded links. The result now takes up less vertical space as the links for related topics and clinical trials are each listed on one line. The MedlinePlus logo now links to the MedlinePlus website.

By Government, for Government

Our search results page now ensures the public has a common search experience anywhere, anytime, and on any device. This meets a key objective of the federal government’s Digital Government Strategy.

It has also been tested to ensure it is accessible to people with disabilities.

Are You Ready to Turn It On?

Are you ready to turn on the new results page? Tell us if you’re ready now, or if you need some more time or help.

Read our favorite tips to see how other sites have customized their results pages and for some ideas on how to brand the results page for your site.


I Didn't Try to Grow a Bigger Ox: How I Found Hadoop

A year ago I rolled my first Hadoop system into production. Since then, I’ve spoken to quite a few people who are eager to try Hadoop themselves in order to solve their own big data problems. Despite having similar backgrounds and data problems, few of these people have sunk their teeth into Hadoop. When I go to Hadoop Meetups in San Francisco, I often meet new people who are evaluating Hadoop and have yet to launch a cluster. Based on my own background and experience, I have some ideas on why this is the case.

I studied computer science in school and have worked on a wide variety of computer systems in my career, with a lot of focus on server-side Java. I learned a bit about building distributed systems and working with large amounts of data when I built a pay-per-click (PPC) ad network in 2004. The system is still in operation and at one point was handling several thousand searches per second. As the sole technical resource on the system, I had to educate myself very quickly about how to scale up.

As I contemplated how doomed I would be should traffic levels increase much more, I remember wondering to myself, “How does Google deal with all that data?” The answer came to me in the form of the Google File System (GFS) paper and later the MapReduce paper, both from Google. It dawned on me that because Google was forced to solve a much larger problem, they had come up with an elegant solution for a whole range of more modest data problems running on commodity hardware. But it wouldn’t be until 2010 that I would get to work with this technology firsthand.

As I wrote in an earlier article, I started re-architecting DigitalGov Search, the U.S. government’s search system, in 2009 based on a solution stack of free, open source software including Ruby on Rails, Solr, and MySQL. A wave of déjà vu hit me as I started worrying about what to do with the growing mountain of data piling up in MySQL and our increasing need to analyze it in different ways. I had heard that a new company called Cloudera, founded by some big data people from Yahoo!, Google, and Facebook, was making Hadoop available for the masses in a reliable distribution, much in the same way that Red Hat did for Linux. Curiosity got the best of me and I bought the newly minted Hadoop: The Definitive Guide from O’Reilly. The most insightful part of the book to me was the very first sentence. It’s a quote from Grace Hopper: “In pioneer days, they used oxen for heavy pulling, and when one ox couldn’t budge a log, they didn’t try to grow a bigger ox.” I didn’t want to grow a bigger server; I wanted to harness a bunch of small servers together to work in unison. The more I learned, the more curious I got, so I started reading more. And that’s when I hit my first roadblock.

I think people who have been working with Hadoop technologies for years and years sometimes forget just how rich and diverse the big data software ecosystem has become, and how daunting it can be to folks approaching it for the first time. When people at the Meetups say they are evaluating solutions to their data scaling problem, the answers they hear sound something like this: “Just use Hadoop Hive Pig Mahout Avro HBase Cassandra Oozie Sqoop Flume ZooKeeper Cascading NoSQL RCFile. Oh, almost forgot…cloud.”

The thought of wading through all of that just to learn about what I needed to learn about was a bit too overwhelming for me, so I put the whole matter aside for a few months. Over time, I started to dive into each of these projects to understand the primary use case, how active the developer community was and which organizations were using it in production. I converged on the idea of using Hive as a warehouse for our data. I opted for Cloudera’s distribution since I wanted to reduce the risk of running into compatibility issues between all the various subsystems. Having tracked down anomalies in a highly multi-threaded and contentious distributed Java system before, I liked the idea of someone else taking on that problem for me.

At some point, I had read everything I could read and grew impatient to get my hands dirty, so I decided to just download CDH3 on my laptop and give it a try. The tutorial instructions for the standalone version worked, and I successfully computed more digits of pi than I ever thought I’d need. After creating some sample data in Hive and running a few queries, I felt pretty confident that Hive would be the right tool for the job. I just needed to find somewhere to install and run HDFS (namenode, secondary namenode, and data nodes), Hadoop (jobtracker and tasktracker nodes), Hive, and Hue for a nice front end to it all.

I knew from my past experience how to stretch the limits of CPU, disk, IO, and memory on commodity servers, and I identified a few potential servers at our primary datacenter with resources I figured I could leverage. Once again I followed the tutorial instructions, this time for the fully distributed version of CDH3, and once again I started to compute pi. And that’s when I hit my second roadblock. It took me a few days to figure out that I had a problem with DNS. Each machine needs to be able to resolve every other machine’s name and IP in the cluster. Whether you do that via /etc/hosts or a local DNS server is up to you, but it needs to happen or the whole thing gets wedged. Once I got that sorted out, everything just started falling into place and I had Hive working in production within a few days. A week later, I started pulling out the MySQL jobs and deleting big tables, and that’s been the trend ever since.
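A quick sanity check for this kind of DNS problem can be scripted before bringing the cluster up. The hostnames in the comment below are placeholders for your own machines, not names from our cluster:

```ruby
require 'resolv'

# Every machine in a Hadoop cluster must be able to resolve every other
# machine's name, whether via /etc/hosts or a local DNS server.
# Example (hypothetical) cluster list:
#   %w[namenode secondary-nn datanode1 datanode2 jobtracker]
# Returns the subset of hosts that fail to resolve.
def unresolvable(hosts)
  hosts.reject do |host|
    begin
      Resolv.getaddress(host) # consults /etc/hosts, then DNS
      true
    rescue Resolv::ResolvError
      false
    end
  end
end

bad = unresolvable(%w[localhost])
puts bad.empty? ? 'all hosts resolve' : "fix DNS for: #{bad.join(', ')}"
```

Run this on every node against the full host list; if any node reports unresolvable names, the cluster will wedge exactly as described above.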

Over time, I’ve gone on to learn about using custom Ruby mappers in Hive, moving data back and forth between MySQL and Hive with Sqoop, and getting the data into HDFS in real-time with Flume. All of these components from the Cloudera distribution are working nicely in our production environment now, and I sleep well at night knowing I have such a solid, deliberate plan for growth. My initial investment in learning about the Hadoop ecosystem is really paying dividends, but when I think about all those people at the Meetups stuck in evaluation mode, I feel their pain. Does it have to be such a struggle?
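A custom Ruby mapper for Hive is just a script that reads tab-separated rows on stdin and writes tab-separated rows to stdout. Here is a minimal sketch, assuming a hypothetical input table whose rows carry a timestamp and a raw query string (the column layout is made up for illustration):

```ruby
#!/usr/bin/env ruby
# Minimal Hive streaming mapper sketch. Hive pipes each input row to the
# script's stdin as tab-separated fields and reads tab-separated output
# rows back from stdout. The (timestamp, raw_query) layout is hypothetical.

def map_row(line)
  timestamp, raw_query = line.chomp.split("\t", 2)
  return nil if raw_query.nil? || raw_query.strip.empty?

  # Normalize the query so identical searches aggregate together.
  normalized = raw_query.strip.downcase.squeeze(' ')
  day = timestamp.to_s[0, 10] # "2011-11-08" from an ISO 8601 timestamp
  [day, normalized].join("\t")
end

# When driven by Hive, the script would stream every row:
#   ARGF.each_line { |line| row = map_row(line); puts row if row }
```

Hive would invoke it with something like `SELECT TRANSFORM(ts, query) USING 'mapper.rb' AS (day, query) FROM raw_logs` (table and column names made up here).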

The big challenge in my opinion is not that any one piece of the puzzle is too difficult. Any reasonably smart (or in my case stubborn) engineer can set themselves on the task of learning about a new technology once they know that it needs to be learned. The challenge with the Hadoop ecosystem is that it presents the newbie with the meta-problem of figuring out which of these tools are appropriate for their use case at all, and whether or not to even consider the problem today versus deferring it until later. In a way Facebook has it easy, because when you are adding 15TB of data per day, that decision is pretty much made for you.

For all the companies sitting in the twilight between the gigabyte and the petabyte who don’t have Hadoop expertise in-house, there is a collection of free information to help guide people to the right solution space (Hadoop Tutorial, White Papers). These days, when I talk to people who are evaluating solutions to their big data problems, my advice to them is to break down their problems into a few discrete use cases and then work on ferreting out the technologies that are designed for that use case. Get a proof of concept to demonstrate that the technology can address your use case and convince yourself and others that you’re on the right track. Work toward putting something simple into production. Lather, rinse, and repeat. I am still in that cycle myself, as these days I’m exploring HBase and OpenTSDB to give me low-latency access to time series data and Mahout to do frequent item set mining, but that’s another article for another day.

This post is cross-posted from Cloudera (External link).


Cache Me If You Can

Slowness Hurts Web Pages

Have you ever been frustrated when visiting a web page that doesn’t load quickly? Have you ever left a slow web page before it finished loading? You’re not alone.

Several recent studies have quantified customers’ frustration with slow web pages. Customers now expect results in the blink of an eye (External link). This expectation means that your customers are won or lost in one second (External link). A one-second delay in loading a web page equals 11% fewer page views, a 16% decrease in customer satisfaction, and a 7% loss in conversions.

Slowness Kills Search Results Pages

As little time as web sites have to keep users on their pages, search engines have even less time to keep searchers on their results pages. Speed is the primary factor in determining customers’ satisfaction with search results.

Google, Microsoft, and Yahoo garner 95% of the search market (External link), with Google alone garnering two-thirds. The company’s Gospel of Speed (External link) motto is one reason why it holds the majority of the market.

This gospel has also set a high bar for all search engines. Searchers expect results pages to load very, very quickly.

How We’ve Made Our Result Pages Load Faster

So, when we established the service’s open source architecture in 2010, the first thing we tackled was how to deliver our search results in under one second.

At around the same time, GitHub (External link) was experiencing exponential growth, and the company’s engineers were blogging about what they did to make GitHub fast. To get up to speed quickly (yes, bad pun intended), we read their posts.

Leveraging some of GitHub’s best practices, we succeeded in delivering our results in under 700 milliseconds, on average. This was a significant accomplishment and an improvement over the previous vendor-owned and -operated iterations of our service.

Over the past three years, we’ve dug in and improved our response time even more. We now deliver our results in under 380 milliseconds, on average.

App server response times

We already had an architecture optimized for speed. So, how have we sped it up by 320 milliseconds?

We Cache When We Can

When a searcher enters a query, we go out to our various indexes, pull the information relevant to the searcher’s request, and put that information together on the results page.

Most queries (such as jobs, obama, unclaimed money, forms) aren’t unique and are asked by thousands of searchers each day.

We cache these so-called short head queries and store them on our servers. Caching helps us speed up the above process because searchers don’t have to wait for us to pull the information from its original source.
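The core idea can be sketched in a few lines of Ruby. This is a toy in-process cache with a time-to-live, not our actual multi-layer production setup; it just shows why repeated short-head queries come back fast:

```ruby
# Toy sketch of short-head query caching: serve repeated queries from
# memory instead of re-querying the indexes, with a TTL so cached
# results don't go stale.
class QueryCache
  Entry = Struct.new(:results, :expires_at)

  def initialize(ttl_seconds)
    @ttl = ttl_seconds
    @store = {}
  end

  # Returns cached results when fresh; otherwise runs the (slow)
  # block, caches its value, and returns it.
  def fetch(query)
    entry = @store[query]
    return entry.results if entry && Time.now < entry.expires_at

    results = yield(query)
    @store[query] = Entry.new(results, Time.now + @ttl)
    results
  end
end

cache = QueryCache.new(300) # 5-minute TTL
cache.fetch('jobs') { |q| "expensive lookup for #{q}" } # miss: runs the block
cache.fetch('jobs') { |q| raise 'should not run' }      # hit: served from memory
```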

We Use an Asset Pipeline

We have many JavaScript and CSS files on our results pages. These “assets” can be large and slow down the loading of our page. So, we use an asset pipeline (External link) to concatenate and compress our JavaScript and CSS assets, thereby reducing the number of requests that a browser has to make to render our results page.

We also use fingerprinting—a technique that makes a file’s name dependent on its content—within our asset pipeline. When the content changes, the name changes. For content that is static or that changes infrequently, this naming helps us tell whether two versions of a file are identical. When a filename is unique, browsers keep their own copy of the content. When the content is updated, the fingerprint changes so browsers request a new copy of the content. This approach allows us to maximize our content delivery network.
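Fingerprinting is simple to sketch. The helper below is illustrative only (in practice the Rails asset pipeline’s Sprockets does this automatically, deriving the digest from the file’s bytes):

```ruby
require 'digest'

# Fingerprinting: make a file's name depend on its content, so the
# name changes exactly when the content changes. Illustrative sketch;
# the Rails asset pipeline does this for you.
def fingerprinted_name(filename, content)
  digest = Digest::MD5.hexdigest(content)
  base, ext = filename.split('.', 2)
  "#{base}-#{digest}.#{ext}"
end

fingerprinted_name('application.css', 'body { color: #333; }')
# A changed body yields a different name, so browsers fetch the new
# copy while caching unchanged files indefinitely:
fingerprinted_name('application.css', 'body { color: #444; }')
```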

We Use a Content Delivery Network

Our static content (such as scripts and stylesheets) gets served through our content delivery network (External link) provider, currently Akamai. Akamai serves our static content from the server that is geographically closest to the searcher. The closer, the faster.

Using a content delivery network also allows us to optimize our service’s speed by:

  • Directing non-cached traffic between our two datacenters to create a multihomed environment. Multihoming allows us to make full use of all of our servers. By contrast, in 2010, our disaster recovery datacenter often sat idle.
  • Reducing our need to add bandwidth or servers to handle short-term traffic spurts, such as spurts related to natural disasters.
  • Protecting against denial of service attacks by spotting them before they reach our servers.

What’s Next?

We’ve worked hard over the past three years to speed up the delivery of our results by optimizing each link in the chain.

We use several monitoring tools to measure our system’s performance. The quality of these tools is improving at a rapid pace, which in turn, shows us where and how we can improve our service.

We regularly ask ourselves, “Will this shave some time off and help us deliver our results in under 380 milliseconds?”


Six legacy domains are expiring

As part of the federal .gov web reform project, we’re eliminating six of our legacy domains. Going forward, we support only a single domain (or a DNS-masked domain, if you’ve requested masking).

What do you need to do? If your URL starts with any of the following six legacy domains, you must update your HTML form code.

  4. (Spanish)
  5. (Spanish)
  6. (Spanish)

Specifically, you have to update the action of your form code to call the supported domain:

<form method="get" action="">

Note that, if you don’t update your form code, your search results page will no longer work.


DigitalGov Search Wins Government Big Data Solutions Award

DigitalGov Search (formerly USASearch) is the winner of the 2011 Government Big Data Solutions Award, announced at Hadoop World in New York City on November 8, 2011.

The Big Data Award was established to highlight innovative solutions and to facilitate the exchange of best practices, lessons learned, and creative ideas for addressing Big Data challenges. The Award judges saw DigitalGov Search as a great example of solving Big Data problems to improve government agility and to provide better service for less.

In line with the GSA’s cost-saving “build once, use many times” paradigm, DigitalGov Search provides hosted search services for and hundreds of other government websites. This is done in a cost-effective way, especially for the agencies involved, which receive these services at no cost.

From the Award presentation:

The GSA is to be congratulated for their mission-focused, citizen-centered, open approach to a big data challenge and a resulting solution that improves the experience of a broad swath of users of federal services. On behalf of our judges and the many citizens who use this capability on a daily basis we say thank you, and congratulations on this well deserved recognition.
