Googland
Google at NIPS 2010
Posted: 30 Jan 2011 03:00 PM PST
Official Google Research Blog
Posted by Slav Petrov, Doug Aberdeen, and Lisa McCracken, Google Research

The machine learning community met in Vancouver in December for the 24th Neural Information Processing Systems Conference (NIPS). As always, the single-track program of the main conference featured a number of outstanding talks, followed by lively late-night poster sessions. A record number of workshops covered a wide variety of topics while still leaving time for skiing in Whistler; after all, many of the most interesting research conversations happen on the lift between ski runs.

This year's conference also featured a symposium dedicated to Sam Roweis, offering a retrospective on his life and work. Sam, a fellow Googler and a professor at NYU, was at the heart of the NIPS community and is terribly missed.

As always, Google was involved with NIPS in many ways. Here at Google, we take a data-driven approach to solving problems, so machine learning is in one way or another at the core of most of what we do. It is no surprise, then, that many Googlers helped shape the program of the conference or were in the audience. This year, three Googlers served as area chairs and even more served as reviewers. Googlers also co-authored the following papers:
Additionally, Googlers co-organized three well-attended workshops:
Finally, Yoram Singer gave a great talk on Learning Structural Sparsity at the Sam Roweis symposium, and Googlers presented the following talks during the workshops:
Overall, it was a very successful conference, and it was good to be back in Vancouver one last time. This coming year, NIPS 2011 will be held in Granada, Spain. Hasta luego!

URL: http://googleresearch.blogspot.com/2011/01/google-at-nips-2010.html
More Google Contributions to the Broader Scientific Community
Posted: 30 Jan 2011 03:00 PM PST
Official Google Research Blog
Posted by Corinna Cortes and Alfred Spector, Google Research

Googlers actively engage with the scientific community by publishing technical papers, contributing open-source packages, working on standards, introducing new APIs and tools, giving talks and presentations, participating in ongoing technical debates, and much more. Our publications offer technical and algorithmic advances, demonstrate things we learn as we develop novel products and services, and shed light on some of the technical challenges we face at Google. In an effort to highlight some of our work, we periodically select a number of publications to be featured on this site. We first posted a set of papers on this blog in mid-2010 and subsequently discussed them in more detail in follow-up postings. This posting highlights a few noteworthy new papers authored or co-authored by Googlers in the latter half of 2010. In the coming weeks we will offer a more in-depth look at these publications, but here are some summaries:

Algorithms and Electronic Commerce

Robust Mechanisms for Risk-Averse Sellers
ACM Conference on Electronic Commerce (EC)
Mukund Sundararajan and Qiqi Yan, Stanford University
In his seminal Nobel Prize-winning work, Roger Myerson identified the revenue-maximizing auction for a risk-neutral auctioneer. In contrast, this work identifies good mechanisms for risk-averse auctioneers. These mechanisms trade a little revenue for better certainty, in the best possible way. We expect this work to help guide reserve-price selection in auctions where auctioneers and sellers want better control over their revenue.

Monitoring Algorithms for Negative Feedback Systems
World Wide Web Conference (WWW)
Mark Sandler and S. Muthukrishnan
In negative feedback systems, users report abusive content at a site to its owner for consideration or removal, but the users might not be honest. For site owners, this presents a trade-off between having humans vet such user reports and accepting them without vetting. This paper presents a mathematical framework for the design and analysis of such systems, together with algorithms that achieve provably good trade-offs against malicious users.

HCI

Allison Druin, University of Maryland; Elizabeth Foss, University of Maryland; Hilary Hutchinson; Evan Golub, University of Maryland; and Leshell Hatley, University of Maryland
In this paper, we describe seven search roles children display as information seekers using Internet keyword interfaces, based on a home study of 83 children ages 7, 9, and 11.

Machine Learning

Large Scale Image Annotation: Learning to Rank with Joint Word-Image Embeddings
Jason Weston, Samy Bengio, and Nicolas Usunier, Université Paris 6 (LIP6)
In this paper, we introduce a generic framework for finding a joint representation of images and their labels, which can then be used for various tasks, including image ranking and image annotation. We also propose an efficient training algorithm that scales to tens of millions of images and hundreds of thousands of labels while focusing training on making good predictions at the top of the ranked list. The models are fast at prediction time and have low memory usage, making it possible to house such systems on a laptop or mobile device.
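The joint embedding model above is easy to picture in code. Below is a minimal NumPy sketch of the annotation (scoring) step only; the dimensions and the names V, W, and annotate are invented for illustration, and the paper's rank-focused training procedure is not shown.

```python
import numpy as np

# Hypothetical sizes: 1,000-d image features, 50,000 labels, 100-d joint space.
rng = np.random.default_rng(0)
n_features, n_labels, dim = 1000, 50_000, 100

# In the paper these matrices are learned; here they are random stand-ins.
V = rng.normal(scale=0.01, size=(dim, n_features))  # maps an image into the joint space
W = rng.normal(scale=0.01, size=(n_labels, dim))    # one embedding row per label

def annotate(image_features, top_k=5):
    """Rank all labels for one image by dot product in the joint space."""
    z = V @ image_features              # project the image into the joint space
    scores = W @ z                      # one score per label
    return np.argsort(-scores)[:top_k]  # indices of the top-k labels

x = rng.normal(size=n_features)         # stand-in feature vector for one image
print(annotate(x))
```

Training additionally optimizes a ranking loss that concentrates on errors near the top of the list; prediction stays this cheap regardless: two matrix-vector products and a sort over label scores.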
Overlapping Experiment Infrastructure: More, Better, Faster Experimentation
ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD)
Diane Tang, Ashish Agarwal, Deirdre O'Brien, and Mike Meyer
Google's data-driven culture requires running a large number of live traffic experiments. This paper describes Google's overlapping experiment infrastructure, in which a single event (e.g., a web search) can be assigned to multiple simultaneous large experiments. The infrastructure and supporting tools provide a framework that carries experiments from design through decision making and launch, and it can be generalized to many other web applications. (A toy sketch of this layered assignment appears after the Speech entries below.)

NLP

North American Chapter of the Association for Computational Linguistics (NAACL)
Slav Petrov
It is well known that the Expectation Maximization algorithm can converge to widely varying local maxima. This paper shows that this can be advantageous when learning latent variable grammars for syntactic parsing. By combining multiple state-of-the-art individual grammars into an unweighted product model, parsing accuracy can be improved from 90.2% to 91.8% for English, and from 80.3% to 84.5% for German.

Software Engineering

International Symposium on Code Generation and Optimization (CGO)
Jason Mars, University of Virginia; Neil Vachharajani; Robert Hundt; Mary Lou Soffa, University of Virginia
This paper takes a big step toward addressing an important and pressing problem in computer science today. It presents a lightweight runtime solution that significantly improves the utilization of datacenter servers, by up to 58%. The work also received the CGO 2010 Best Presentation Award.

Speech

Interspeech
Maryam Kamvar and Doug Beeferman
Say What? Have you been speaking your search queries into your mobile device rather than typing them? Spoken search is available on Android, iPhone, and BlackBerry devices, and we see an increasing number of searches coming in by voice on these phones. In our paper, "Say What: Why users choose to speak their web queries," we investigate, at an aggregate level, which factors are most predictive of spoken queries. Understanding the contexts in which speech-driven search is used (or, conversely, not used) can help improve recognition engines and spoken interface design. So save keystrokes and say your query!

Query Language Modeling for Voice Search
IEEE Workshop on Spoken Language Technology
Ciprian Chelba, Johan Schalkwyk, Thorsten Brants, Vida Ha, Boulos Harb, Will Neveitt, Carolina Parada*, Johns Hopkins University, and Peng Xu
The paper describes language modeling for google.com query data and its application to speech recognition for Google Voice Search. Our empirical findings include:
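To make the overlapping-experiments idea concrete, here is the toy sketch promised above: each layer hashes the same event independently, so one search can be diverted into at most one experiment per layer, and experiments in different layers (which change disjoint sets of parameters) can safely run on the same traffic. The layer names, bucket counts, and functions below are invented for illustration and are not Google's actual infrastructure.

```python
import hashlib

# Invented layers: each layer owns 1,000 traffic buckets and assigns some of
# them to experiments; experiments in different layers must change disjoint
# parameters so that they can overlap safely on the same event.
LAYERS = {
    "ui":      {"buckets": 1000, "experiments": {"blue_links": range(0, 50)}},
    "ranking": {"buckets": 1000, "experiments": {"new_scorer": range(0, 100)}},
}

def bucket(event_id: str, layer: str, n_buckets: int) -> int:
    """Hash the event id with a per-layer salt so layers divert independently."""
    digest = hashlib.sha256(f"{layer}:{event_id}".encode()).hexdigest()
    return int(digest, 16) % n_buckets

def assign(event_id: str) -> dict:
    """Return at most one experiment per layer for this event."""
    assignments = {}
    for layer, cfg in LAYERS.items():
        b = bucket(event_id, layer, cfg["buckets"])
        for experiment, buckets in cfg["experiments"].items():
            if b in buckets:
                assignments[layer] = experiment
    return assignments

# One query can land in zero, one, or both experiments, at most one per layer.
print(assign("query-42"))
```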
Structured Data

Very Large Data Bases (VLDB)
Sergey Melnik, Andrey Gubarev, Jing Jing Long, Geoffrey Romer, Shiva Shivakumar, Matt Tolton, and Theo Vassilakis, Google Inc.
Dremel is a scalable, interactive ad hoc query system. By combining multi-level execution trees with a columnar data layout, it is capable of running aggregation queries over trillion-row tables in seconds. The system is widely used at Google and serves as the foundational technology behind BigQuery, a product launched in limited preview mode.

Systems and Infrastructure

USENIX Symposium on Operating Systems Design and Implementation (OSDI)
Daniel Peng and Frank Dabek
In the past, Google accumulated a whole day's worth of changes to the web and ran a series of enormous MapReduces to apply this batch of changes to our index of the web. This system led to a delay of several days between crawling a document and presenting it to users in search results. To meet our goal of reducing the indexing delay to minutes, we needed to update the index as each individual document was crawled, rather than in daily batches. No existing infrastructure supported this kind of incremental transformation at web scale, so we built Percolator: a framework for transforming a large repository using small ACID transactions.

Availability in Globally Distributed Storage Systems
USENIX Symposium on Operating Systems Design and Implementation (OSDI)
Daniel Ford, Francois Labelle, Florentina Popovici, Murray Stokely, Van-Anh Truong*, Columbia University, Luiz Barroso, Carrie Grimes, and Sean Quinlan
In our paper, we characterize the availability of cloud storage systems, based on extensive monitoring of Google's main storage infrastructure, and the sources of failure that affect availability. We also present statistical models for reasoning about the impact of design choices such as data placement, recovery speed, and replication strategies, including replication across multiple data centers.

Vision

IEEE International Conference on Data Mining (ICDM)
Sergey Ioffe
Given the huge amounts of very high-dimensional data we handle, such as images and videos, we frequently need to "sketch" the data, that is, represent it in a much more compact form while still allowing us to accurately determine how different any two images or videos are. In this paper, we describe a sketching method for L1, one of the most common distance measures. It works by first hashing the data with a new algorithm and then compressing each hash to a small number of bits, where the compression is learned from data. The method is fast and estimates distances accurately while reducing storage requirements by a factor of 100. (An illustrative sketch follows the post link below.)

*) Work carried out while at Google.

URL: http://googleresearch.blogspot.com/2011/01/more-google-contributions-to-broader.html
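To give a feel for the L1 sketching in the Vision entry above, here is a classical alternative technique, 1-stable (Cauchy) random projections, rather than the paper's consistent-sampling-plus-compression method, which is not reproduced here; all sizes and names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_proj = 10_000, 256   # original dimensionality vs. sketch size

# Cauchy (1-stable) projections: r @ (x - y) is Cauchy-distributed with scale
# ||x - y||_1, so the median of |sketch differences| estimates the L1 distance.
R = rng.standard_cauchy(size=(n_proj, dim))

def sketch(x):
    return R @ x  # n_proj numbers summarize a dim-dimensional vector

def estimate_l1(sx, sy):
    return np.median(np.abs(sx - sy))

x, y = rng.random(dim), rng.random(dim)
print("exact L1:       ", np.sum(np.abs(x - y)))
print("sketch estimate:", estimate_l1(sketch(x), sketch(y)))
```

With 256 projections this estimate is rough (on the order of ten percent relative error); the paper's learned bit compression is what pushes the storage savings toward the factor of 100 quoted above while keeping the estimates accurate.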