Posted by Jeff Dean, Google Senior Fellow, on behalf of the entire Google Brain team
The Google Brain team’s long-term goal is to create more intelligent software and systems that improve people’s lives, which we pursue through both pure and applied research in a variety of domains. And while this is obviously a long-term goal, we would like to take a step back and look at some of the progress our team has made over the past year, and share what we feel may be in store for 2017.
Research Publications
One important way in which we assess the quality of our research is through publications in top-tier international machine learning venues like ICML, NIPS, and ICLR. Last year our team had a total of 27 accepted papers at these venues, covering a wide-ranging set of topics including program synthesis, knowledge transfer from one network to another, distributed training of machine learning models, generative models for language, unsupervised learning for robotics, automated theorem proving, better theoretical understanding of neural networks, algorithms for improved reinforcement learning, and many others. We also had numerous other papers accepted at conferences in fields such as natural language processing (ACL, CoNLL), speech (ICASSP), vision (CVPR), robotics (ISER), and computer systems (OSDI). Our group has also submitted 34 papers to the upcoming ICLR 2017, a top venue for cutting-edge deep learning research. You can learn more about our work in our list of papers here.
Natural Language Understanding
Allowing computers to better understand human language is one key area for our research. In late 2014, three Brain team researchers published a paper on Sequence to Sequence Learning with Neural Networks, and demonstrated that the approach could be used for machine translation. In 2015, we showed that this approach could also be used for generating captions for images, parsing sentences, and solving computational geometry problems. In 2016, this previous research (plus many enhancements) culminated in Brain team members working closely with members of the Google Translate team to replace the translation algorithms powering Google Translate with a completely end-to-end learned system (research paper). This new system closed the gap between the old system and human-quality translations by up to 85% for some language pairs. A few weeks later, we showed how the system could do “zero-shot translation”, learning to translate between languages for which it had never seen example sentence pairs (research paper). This system is now deployed on the production Google Translate service for a growing number of language pairs, giving our users higher-quality translations and allowing people to communicate more effectively across language barriers. Gideon Lewis-Kraus documented this translation effort (along with the history of deep learning and the history of the Google Brain team) in “The Great A.I. Awakening”, an in-depth article that appeared in The New York Times Magazine in December 2016.
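To make the encoder-decoder idea concrete, here is a minimal sketch written against the TensorFlow 1.x API of this period. The vocabulary size, dimensions, and variable names are illustrative assumptions, not the configuration of the production system:

```python
# A minimal sketch of the sequence-to-sequence (encoder-decoder) idea,
# in TensorFlow 1.x style. Sizes and names below are illustrative
# assumptions, not Google Translate's actual settings.
import tensorflow as tf

VOCAB_SIZE = 32000  # assumed shared source/target vocabulary
EMBED_DIM = 256
HIDDEN_DIM = 512

source_ids = tf.placeholder(tf.int32, [None, None])  # [batch, src_len]
target_ids = tf.placeholder(tf.int32, [None, None])  # [batch, tgt_len]

embedding = tf.get_variable("embedding", [VOCAB_SIZE, EMBED_DIM])
src_emb = tf.nn.embedding_lookup(embedding, source_ids)
tgt_emb = tf.nn.embedding_lookup(embedding, target_ids)

# Encoder: compress the source sentence into a fixed-size state.
with tf.variable_scope("encoder"):
    enc_cell = tf.nn.rnn_cell.LSTMCell(HIDDEN_DIM)
    _, enc_state = tf.nn.dynamic_rnn(enc_cell, src_emb, dtype=tf.float32)

# Decoder: generate the target sentence, conditioned on that state.
with tf.variable_scope("decoder"):
    dec_cell = tf.nn.rnn_cell.LSTMCell(HIDDEN_DIM)
    dec_outputs, _ = tf.nn.dynamic_rnn(dec_cell, tgt_emb,
                                       initial_state=enc_state)

logits = tf.layers.dense(dec_outputs, VOCAB_SIZE)  # next-token scores
```

The production system is of course far larger, stacking many LSTM layers with attention. Notably, the zero-shot multilingual variant required no architectural change at all: a token indicating the desired target language (e.g. <2es> for Spanish) is simply prepended to the source sentence.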
Robotics
Traditional robotics control algorithms are carefully and painstakingly hand-programmed, and therefore endowing robots with new capabilities is often a very laborious process. We believe that having robots acquire new skills automatically through machine learning is a better approach. Last year, we collaborated with researchers at [X] to demonstrate how robotic arms could learn hand-eye coordination, pooling their experiences to teach themselves more quickly (research paper). Our robots made about 800,000 grasping attempts during this research. Later in the year, we explored three possible ways for robots to learn new skills: through reinforcement learning, through their own interaction with objects, and through human demonstrations. We’re continuing to build on this work toward our goal of making robots that can flexibly and readily learn new tasks and operate in messy, real-world environments. To help other robotics researchers, we have made multiple robotics datasets publicly available.
Healthcare
We are excited by the potential to use machine learning to augment the abilities of doctors and healthcare practitioners. As just one example of the possibilities, in a paper published in the Journal of the American Medical Association (JAMA), we demonstrated that a machine-learning-driven system for diagnosing diabetic retinopathy from a retinal image could perform on par with board-certified ophthalmologists. With more than 400 million people at risk for blindness if early symptoms of diabetic retinopathy go undetected, but too few ophthalmologists to perform the necessary screening in many countries, this technology could help ensure that more people receive the proper screening. We are also doing work in other medical imaging domains, as well as investigating the use of machine learning for other kinds of medical prediction tasks. We believe that machine learning can improve the quality and efficiency of the healthcare experience for doctors and patients, and we’ll have more to say about our work in this area in 2017.
Music and Art Generation
Technology has always helped define how people create and share media — consider the printing press, film, or the electric guitar. Last year we started a project called Magenta to explore the intersection of art and machine intelligence, and the potential of using machine learning systems to augment human creativity. Starting with music and image generation and moving to areas like text generation and VR, Magenta is advancing the state of the art in generative models for content creation. We’ve helped to organize a one-day symposium on these topics and supported an art exhibition of machine-generated art. We’ve explored a variety of topics in music generation and artistic style transfer, and our jam session demo won the Best Demo Award at NIPS 2016.
AI Safety and Fairness
As we develop more powerful and sophisticated AI systems and deploy them in a wider variety of real-world settings, we want to ensure that these systems are both safe and fair, and we also want to build tools to help humans better understand the output they produce. In the area of AI safety, in a cross-institutional collaboration with researchers at Stanford, Berkeley, and OpenAI, we published a white paper on Concrete Problems in AI Safety (see the blog post here). The paper outlines some specific problems and areas where we believe there is real and foundational research to be done in AI safety. One aspect of safety on which we are making progress is protecting the privacy of training data, obtaining differential privacy guarantees, most recently via knowledge transfer techniques. In addition to safety, as we start to rely on AI systems to make more complex and sophisticated decisions, we want to ensure that those decisions are fair. In a paper on equality of opportunity in supervised learning (see the blog post here), we showed how to optimally adjust any trained predictor to prevent one particular formal notion of discrimination, and illustrated this with a case study based on FICO credit scores. To make this work more accessible, we also created a visualization to help illustrate and interactively explore the concepts from the paper.
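As a rough illustration of the post-processing idea behind that fairness work, one can choose a per-group decision threshold on an existing predictor’s scores so that the true positive rate is equalized across groups. The sketch below is a simplification, not the paper’s exact optimization, and its function names and toy data are our own:

```python
# Simplified sketch of equal-opportunity post-processing: choose a
# per-group threshold on an existing score so every group gets the
# same true positive rate. Illustrative only; the toy data is made up.
import numpy as np

def threshold_for_tpr(scores, labels, target_tpr):
    """Smallest threshold whose true positive rate is >= target_tpr."""
    positives = np.sort(scores[labels == 1])
    k = int(np.floor((1.0 - target_tpr) * len(positives)))
    k = min(max(k, 0), len(positives) - 1)
    return positives[k]

def equal_opportunity_thresholds(scores, labels, groups, target_tpr):
    """Per-group thresholds that equalize TPR at target_tpr."""
    return {g: threshold_for_tpr(scores[groups == g],
                                 labels[groups == g], target_tpr)
            for g in np.unique(groups)}

# Toy usage: two groups whose score distributions differ, so a single
# global threshold would give them different true positive rates.
rng = np.random.RandomState(0)
labels = rng.randint(0, 2, size=1000)
groups = rng.randint(0, 2, size=1000)
scores = 0.5 * labels + 0.1 * groups + rng.normal(0.0, 0.3, size=1000)
print(equal_opportunity_thresholds(scores, labels, groups, target_tpr=0.8))
```

Because the groups’ score distributions differ, a single global cutoff would treat them unequally; per-group thresholds trade a little overall accuracy for equal opportunity, an adjustment the paper shows how to make optimally.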
TensorFlow
In November 2015, we open-sourced an initial version of TensorFlow so that the rest of the machine learning community could benefit from it and we could all collaborate to jointly improve it. In 2016, TensorFlow became the most popular machine learning project on GitHub, with over 10,000 commits by more than 570 people. TensorFlow’s repository of models has grown with contributions from the community, and there are also more than 5,000 TensorFlow-related repositories listed on GitHub alone! Furthermore, TensorFlow has been widely adopted by well-known research groups and large companies including DeepMind, and applied to some unusual applications, like finding sea cows Down Under and sorting cucumbers in Japan.
We’ve made numerous performance improvements, added support for distributed training, brought TensorFlow to iOS, Raspberry Pi, and Windows, and integrated TensorFlow with widely-used big data infrastructure. We’ve extended TensorBoard, TensorFlow’s visualization system, with improved tools for visualizing computation graphs and embeddings. We’ve also made TensorFlow accessible from Go, Rust, and Haskell, released state-of-the-art image classification models and Wide and Deep models, and answered thousands of questions on GitHub, StackOverflow, and the TensorFlow mailing list along the way. TensorFlow Serving simplifies the process of serving TensorFlow models in production, and for those working in the cloud, Google Cloud Machine Learning offers TensorFlow as a managed service.
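For readers new to TensorFlow, here is a minimal program in the 1.x graph-and-session style of this period. It is a generic sketch rather than anything from a Brain project, and the log directory path is an arbitrary choice:

```python
# A minimal TensorFlow 1.x program: build a dataflow graph, run it in
# a session, and write the graph out for TensorBoard to visualize.
import tensorflow as tf

a = tf.constant([[1.0, 2.0]])          # 1x2 constant
w = tf.Variable([[3.0], [4.0]])        # 2x1 variable
y = tf.matmul(a, w)                    # graph node, not eager computation

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y))                 # [[11.]]
    # Write the graph so TensorBoard's graph visualizer can render it.
    tf.summary.FileWriter("/tmp/tf_demo", sess.graph)
```

Running `tensorboard --logdir /tmp/tf_demo` then displays the computation graph in the browser, the same visualization tooling described above.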
Last November, we celebrated TensorFlow’s one-year anniversary as an open-source project, and presented a paper on the computer systems aspects of TensorFlow at OSDI, one of the premier computer systems research conferences. In collaboration with our colleagues on the compiler team at Google, we’ve also been hard at work on a backend compiler for TensorFlow called XLA, an alpha version of which was recently added to the open-source release.
Machine Learning Community Involvement
We also strive to educate and mentor people in how to do machine learning and how to conduct research in this field. Last January, Vincent Vanhoucke, one of the research leads in the Brain team, developed a free online deep learning course and worked with Udacity to make it available (blog announcement). We also put together TensorFlow Playground, a fun and interactive system to help people better understand and visualize how very simple neural networks learn to accomplish tasks.
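In the same spirit as the Playground, here is a tiny TensorFlow 1.x network learning XOR, the classic task a single linear layer cannot solve. The layer width, optimizer, and step count are arbitrary illustrative choices:

```python
# A Playground-sized example: a two-layer network learning XOR.
import numpy as np
import tensorflow as tf

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
Y = np.array([[0], [1], [1], [0]], dtype=np.float32)

x = tf.placeholder(tf.float32, [None, 2])
y = tf.placeholder(tf.float32, [None, 1])

hidden = tf.layers.dense(x, 4, activation=tf.nn.tanh)  # hidden layer
logits = tf.layers.dense(hidden, 1)                    # output layer
loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits))
train_op = tf.train.AdamOptimizer(0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(1000):
        sess.run(train_op, feed_dict={x: X, y: Y})
    # Predicted probabilities should approach [0, 1, 1, 0].
    print(sess.run(tf.nn.sigmoid(logits), feed_dict={x: X}))
```

Watching the decision boundary of a network like this take shape is exactly what the Playground lets you do interactively in the browser.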
In June we welcomed our first class of 27 Google Brain Residents, selected from more than 2,200 applicants, and in seven months they have already conducted significant original research, helping to author 21 research papers. In August, many Brain team members took part in a Google Brain team Reddit AMA (Ask Me Anything) on r/MachineLearning to answer the community’s questions about machine learning and our team. Throughout the year, we also hosted 46 student interns (mostly Ph.D. students) in our group to conduct research and work with our team members.
Spreading Machine Learning within Google
In addition to the public-facing activities outlined above, we have continued to work within Google to spread machine learning expertise and awareness throughout our many product teams, and to ensure that the company as a whole is well positioned to take advantage of any new machine learning research that emerges. As one example, we worked closely with our platforms team to provide specifications and high-level goals for Google’s Tensor Processing Unit (TPU), a custom machine learning accelerator ASIC that was discussed at Google I/O. This custom chip provides an order-of-magnitude improvement for machine learning workloads, and is heavily used throughout our products, including for RankBrain, for the recently launched Neural Machine Translation system, and for the AlphaGo match against Lee Sedol in Korea last March.
All in all, 2016 was an exciting year for the Google Brain team and our many collaborators and colleagues both within and outside of Google, and we look forward to our machine learning research having significant impact in 2017!