By Venky Ganti, Ph.D.
Published on February 20, 2020
Among the many challenges of working with big data, the 3V’s (Volume, Velocity, and Variety) have gotten a lot of attention; googling the term yields many results worth reading. Almost all of these focus on the technological challenges of managing and processing big data. In this post, I would like to highlight a different set of issues that make working with big data challenging, even when the underlying infrastructure is admirably able to handle all three V’s.
At Google[1], I had the opportunity to work within an amazing engineering team. I learnt various aspects of running services at scale, as well as developing and launching compelling data products. I worked on the Dynamic Search Ads product, which automates AdWords campaign setup and optimization. Given an advertiser’s website, our goal was to mine relevant keywords and, for each keyword, automatically create an advertisement (both the ad text and the landing page). I worked with data from a variety of sources, often to improve our product and sometimes to debug issues.
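To make that flow concrete, here is a minimal, purely illustrative sketch in Python: crawl an advertiser’s pages, mine candidate keywords, and pair each keyword with generated ad text and a landing page. Every function name and heuristic below is my own assumption for illustration, not Google’s actual implementation.

```python
# Illustrative sketch only: mine keywords from an advertiser's pages and
# create one ad (text + landing page) per keyword. All names and heuristics
# here are hypothetical, not the real Dynamic Search Ads pipeline.
from collections import Counter

def mine_keywords(pages, top_n=5):
    """Rank candidate keywords by how often they appear across pages."""
    counts = Counter()
    for text in pages.values():
        for token in text.lower().split():
            if len(token) > 3:  # crude filter standing in for real NLP
                counts[token] += 1
    return [kw for kw, _ in counts.most_common(top_n)]

def build_ad(keyword, pages):
    """Use the page that mentions the keyword most as its landing page."""
    landing = max(pages, key=lambda url: pages[url].lower().count(keyword))
    return {"keyword": keyword,
            "ad_text": f"Find {keyword} here",  # placeholder ad copy
            "landing_page": landing}

pages = {
    "https://example.com/shoes": "running shoes and trail shoes on sale",
    "https://example.com/boots": "hiking boots and winter boots in stock",
}
ads = [build_ad(kw, pages) for kw in mine_keywords(pages)]
```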
We all know that Google organizes all of the information on the web and enables users to quickly find what is relevant. But how do engineers feel about working with data at Google?
On the upside, they feel empowered working with the rich data that Google collects from the huge amount of user activity on its properties. Google’s data infrastructure ranks among the best out there; it is the place where many of the modern ideas for storing and processing “big data” originated. Combine that infrastructure with a high calibre of engineers, and a natural outcome is a massive number of information-rich derivative datasets.
On the downside, I think we could have been more effective and efficient at finding and understanding data. Let me articulate some of the issues that contributed to these inefficiencies:
How do I find data that I can use for my current purpose? How do I understand the contents of a dataset after I find something?
Who do I ask for more information about the data? Has someone else used this data for a purpose similar to mine?
How do I debug unexpected data issues? Can upstream data changes explain such issues?
How do I set garbage collection policies for data I generate periodically?
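To make that last question concrete, here is a minimal sketch, assuming dated partitions and a per-dataset retention table. Both the dataset names and the retention windows are hypothetical, not any particular system’s API.

```python
# Hypothetical sketch of a garbage-collection policy for periodically
# generated data: keep each dataset's dated partitions for a fixed window
# and flag anything older for deletion.
from datetime import date, timedelta

RETENTION_DAYS = {"clicks_daily": 90, "debug_logs": 14}  # assumed policy

def expired_partitions(dataset, partition_dates, today):
    """Return the partition dates that fall outside the retention window."""
    cutoff = today - timedelta(days=RETENTION_DAYS[dataset])
    return [d for d in partition_dates if d < cutoff]

# Example: with a 14-day window, the January partition is collectible.
old = expired_partitions("debug_logs",
                         [date(2020, 1, 1), date(2020, 2, 15)],
                         today=date(2020, 2, 20))
```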
In a couple of posts following this one, I will share my experience with each of these questions: how they affected my efficiency, and how they raised the bar of motivation needed to work with new data.
[1] I worked at Google until 2012, and my experience is based on tools and technology before I left.