Google Brain

Google Brain was a deep learning artificial intelligence research team under the umbrella of Google AI, a research division at Google dedicated to artificial intelligence. Formed in 2011, Google Brain combined open-ended machine learning research with information systems and large-scale computing resources.[1] The team created tools such as TensorFlow, which made neural networks accessible to the public, alongside multiple internal AI research projects.[2] It aimed to create research opportunities in machine learning and natural language processing.[2] The team was merged into former Google sister company DeepMind to form Google DeepMind in April 2023.

Google Brain
Type: Artificial intelligence and machine learning
Founders: Greg S. Corrado, Jeff Dean
Defunct: April 2023
Successor: Google DeepMind
Headquarters: Mountain View, California
Website: ai.google/brain-team/

History

The Google Brain project began in 2011 as a part-time research collaboration between Google fellow Jeff Dean, Google Researcher Greg Corrado, and Stanford University professor Andrew Ng.[3] Ng had been interested in using deep learning techniques to crack the problem of artificial intelligence since 2006, and in 2011 began collaborating with Dean and Corrado to build a large-scale deep learning software system, DistBelief,[4] on top of Google's cloud computing infrastructure. Google Brain started as a Google X project and became so successful that it was graduated back to Google: Astro Teller has said that Google Brain paid for the entire cost of Google X.[5]

In June 2012, The New York Times reported that a cluster of 16,000 processors in 1,000 computers dedicated to mimicking some aspects of human brain activity had successfully trained itself to recognize a cat based on 10 million digital images taken from YouTube videos.[3] The story was also covered by National Public Radio.[6]

In March 2013, Google hired Geoffrey Hinton, a leading researcher in the deep learning field, and acquired the company DNNResearch Inc. headed by Hinton. Hinton said that he would be dividing his future time between his university research and his work at Google.[7]

In April 2023, Google Brain merged with Google sister company DeepMind to form Google DeepMind, as part of the company's continued efforts to accelerate work on AI.[8]

Team and location

Google Brain was initially established by Google Fellow Jeff Dean and visiting Stanford professor Andrew Ng. In 2014, the team included Jeff Dean, Quoc Le, Ilya Sutskever, Alex Krizhevsky, Samy Bengio, and Vincent Vanhoucke. In 2017, team members included Anelia Angelova, Samy Bengio, Greg Corrado, George Dahl, Michael Isard, Anjuli Kannan, Hugo Larochelle, Chris Olah, Salih Edneer, Benoit Steiner, Vincent Vanhoucke, Vijay Vasudevan, and Fernanda Viegas.[9] Chris Lattner, who created Apple's programming language Swift and then ran Tesla's autonomy team for six months, joined Google Brain's team in August 2017.[10] Lattner left the team in January 2020 and joined SiFive.[11]

As of 2021, Google Brain was led by Jeff Dean, Geoffrey Hinton, and Zoubin Ghahramani. Other members include Katherine Heller, Pi-Chuan Chang, Ian Simon, Jean-Philippe Vert, Nevena Lazic, Anelia Angelova, Lukasz Kaiser, Carrie Jun Cai, Eric Breck, Ruoming Pang, Carlos Riquelme, Hugo Larochelle, and David Ha.[9] Samy Bengio left the team in April 2021,[12] and Zoubin Ghahramani took on his responsibilities.

Google Research includes Google Brain and is based in Mountain View, California. It also has satellite groups in Accra, Amsterdam, Atlanta, Beijing, Berlin, Cambridge (Massachusetts), Israel, Los Angeles, London, Montreal, Munich, New York City, Paris, Pittsburgh, Princeton, San Francisco, Seattle, Tokyo, Toronto, and Zürich.[13]

Projects

Artificial-intelligence-devised encryption system

In October 2016, Google Brain designed an experiment to determine whether neural networks could learn secure symmetric encryption.[14] In this experiment, three neural networks were created: Alice, Bob and Eve.[15] Following the idea of a generative adversarial network (GAN), the goal of the experiment was for Alice to send an encrypted message to Bob that Bob could decrypt, but that the adversary, Eve, could not.[15] Alice and Bob maintained an advantage over Eve in that they shared the key used for encryption and decryption.[14] In doing so, Google Brain demonstrated that neural networks can learn a form of secure encryption.[14]
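
The cited work (Abadi and Andersen, 2016) trains the three networks adversarially rather than with a hand-designed cipher. The sketch below is a deliberately simplified, hypothetical illustration of that training setup: tiny dense networks stand in for Alice, Bob and Eve, and the bit widths, architectures and loss weights are assumptions rather than the published configuration.

```python
# Minimal sketch of adversarial neural cryptography (assumed sizes and losses).
import tensorflow as tf

N = 16  # bits per plaintext and per key (illustrative)

def make_net():
    # Tiny fully connected network standing in for Alice, Bob or Eve.
    return tf.keras.Sequential([
        tf.keras.layers.Dense(2 * N, activation="relu"),
        tf.keras.layers.Dense(N, activation="tanh"),  # outputs in [-1, 1], read as bits
    ])

alice, bob, eve = make_net(), make_net(), make_net()
ab_optimizer = tf.keras.optimizers.Adam(1e-3)
eve_optimizer = tf.keras.optimizers.Adam(1e-3)

def random_bits(batch):
    # Random +/-1 bit vectors for plaintexts and keys.
    return tf.sign(tf.random.uniform((batch, N), -1.0, 1.0))

for step in range(2000):
    plaintext, key = random_bits(256), random_bits(256)
    with tf.GradientTape(persistent=True) as tape:
        ciphertext = alice(tf.concat([plaintext, key], axis=1))  # Alice encrypts with the shared key
        bob_guess = bob(tf.concat([ciphertext, key], axis=1))    # Bob decrypts with the same key
        eve_guess = eve(ciphertext)                               # Eve sees only the ciphertext
        bob_error = tf.reduce_mean(tf.abs(plaintext - bob_guess))
        eve_error = tf.reduce_mean(tf.abs(plaintext - eve_guess))
        # Alice and Bob want Bob to be accurate while Eve does no better than chance
        # (an average error of about 1.0 on +/-1 bits).
        ab_loss = bob_error + tf.square(1.0 - eve_error)
        eve_loss = eve_error
    ab_vars = alice.trainable_variables + bob.trainable_variables
    eve_vars = eve.trainable_variables
    ab_optimizer.apply_gradients(zip(tape.gradient(ab_loss, ab_vars), ab_vars))
    eve_optimizer.apply_gradients(zip(tape.gradient(eve_loss, eve_vars), eve_vars))
    del tape
```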

Image enhancement

In February 2017, Google Brain developed a probabilistic method for converting images with 8x8 resolution to a resolution of 32x32.[16][17] The method built upon the already existing probabilistic model PixelCNN to generate pixel translations.[18][19]

The proposed software utilizes two neural networks to make approximations for the pixel makeup of translated images.[17][20] The first network, known as the "conditioning network," downsizes high-resolution images to 8x8 and attempts to create mappings from the original 8x8 image to these higher-resolution ones.[17] The other network, known as the "prior network," uses the mappings from the previous network to add more detail to the original image.[17] The resulting translated image is not the same image in higher resolution, but rather a 32x32 resolution estimation based on other existing high-resolution images.[17] Google Brain's results indicate the possibility for neural networks to enhance images.[21]
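
To make the two-network structure concrete, the sketch below is a hypothetical, heavily simplified illustration: a conditioning network upsamples the 8x8 input to per-pixel logits at 32x32, a stand-in "prior" network (a PixelCNN in the cited work, replaced here by an ordinary convolutional network for brevity) produces additional logits, and the two are summed before a softmax over the 256 possible values of each output pixel. Layer sizes and shapes are assumptions rather than the published architecture.

```python
# Illustrative sketch: combine a conditioning network and a (stubbed) prior network
# into a per-pixel distribution over 32x32 output intensities.
import tensorflow as tf

NUM_INTENSITIES = 256  # 8-bit pixel values

def conditioning_network():
    # Upsamples an 8x8x3 image to per-pixel logits at 32x32.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu"),
        tf.keras.layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu"),
        tf.keras.layers.Conv2D(3 * NUM_INTENSITIES, 1),
    ])

def prior_network():
    # Placeholder for the autoregressive PixelCNN prior over the 32x32 output.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
        tf.keras.layers.Conv2D(3 * NUM_INTENSITIES, 1),
    ])

cond_net, prior_net = conditioning_network(), prior_network()
low_res = tf.random.uniform((1, 8, 8, 3))   # stand-in 8x8 input image
current_hi = tf.zeros((1, 32, 32, 3))       # partially generated 32x32 output

logits = cond_net(low_res) + prior_net(current_hi)            # sum the two networks' logits
logits = tf.reshape(logits, (-1, 32, 32, 3, NUM_INTENSITIES))
pixel_probs = tf.nn.softmax(logits, axis=-1)                  # distribution over each pixel's value
print(pixel_probs.shape)  # (1, 32, 32, 3, 256)
```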

Google Translate

The Google Brain team contributed to the Google Translate project by employing a new deep learning system that combines artificial neural networks with vast databases of multilingual texts.[22] In September 2016, Google Neural Machine Translation (GNMT) was launched, an end-to-end learning framework, able to learn from a large number of examples.[22] Previously, Google Translate's Phrase-Based Machine Translation (PBMT) approach would statistically analyze word by word and try to match corresponding words in other languages without considering the surrounding phrases in the sentence.[23] But rather than choosing a replacement for each individual word in the desired language, GNMT evaluates word segments in the context of the rest of the sentence to choose more accurate replacements.[2] Compared to older PBMT models, the GNMT model scored a 24% improvement in similarity to human translation, with a 60% reduction in errors.[2][22] The GNMT has also shown significant improvement for notoriously difficult translations, like Chinese to English.[22]

While the introduction of GNMT increased the quality of Google Translate's translations for the pilot languages, it was very difficult to create such improvements for all of its 103 languages. Addressing this problem, the Google Brain team developed a multilingual GNMT system, which extended the previous one by enabling translations between multiple languages. It also allows for zero-shot translations, which are translations between two languages that the system has never explicitly seen before.[24] Google announced that Google Translate can now also translate without transcribing, using neural networks. This means that it is possible to translate speech in one language directly into text in another language, without first transcribing it to text.
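
The key mechanism in the cited multilingual system works at the data level: every source sentence is prefixed with an artificial token naming the desired target language, so one shared model serves all language pairs and can attempt pairs it never saw during training. The snippet below is a toy illustration of that preprocessing convention; the token format and example sentences are illustrative assumptions, not the production vocabulary.

```python
# Toy sketch of the target-language token trick behind multilingual / zero-shot NMT.
def add_target_token(source_sentence: str, target_lang: str) -> str:
    # "<2xx>" tells the shared encoder-decoder which language to produce.
    return f"<2{target_lang}> {source_sentence}"

training_pairs = [
    (add_target_token("How are you?", "es"), "¿Cómo estás?"),     # English -> Spanish
    (add_target_token("¿Cómo estás?", "en"), "How are you?"),     # Spanish -> English
    (add_target_token("Wie geht es dir?", "en"), "How are you?"), # German  -> English
]

# Zero-shot request: German -> Spanish was never seen in training, but the same
# prefixed format lets the single shared model attempt it anyway.
zero_shot_input = add_target_token("Wie geht es dir?", "es")
print(zero_shot_input)  # "<2es> Wie geht es dir?"
```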

According to researchers at Google Brain, this intermediate transcription step can be avoided using neural networks. For the system to learn this, they exposed it to many hours of Spanish audio together with the corresponding English text. The layers of the network learned to link the corresponding parts of the audio waveform and the English text, allowing speech to be mapped directly to translated text.[25] Another drawback of the GNMT model is that it causes translation time to increase exponentially with the number of words in the sentence.[2] This led the Google Brain team to add 2,000 more processors to ensure the new translation process would still be fast and reliable.[23]

Robotics

To improve on traditional robotics control algorithms, in which new robot skills must be hand-programmed, robotics researchers at Google Brain developed machine learning techniques that allow robots to learn new skills on their own.[26] They also attempted to develop ways for robots to share information with each other during the learning process, an approach known as cloud robotics.[27] As a result, Google launched the Google Cloud Robotics Platform for developers in 2019, an effort to combine robotics, AI, and the cloud to enable efficient robotic automation through cloud-connected collaborative robots.[27]

Robotics research at Google Brain has focused mostly on improving and applying deep learning algorithms to enable robots to complete tasks by learning from experience, simulation, human demonstrations, and/or visual representations.[28][29][30][31] For example, Google Brain researchers showed that robots can learn to pick and throw rigid objects into selected boxes by experimenting in an environment, without being pre-programmed to do so.[28] In another study, robots learned behaviors such as pouring liquid from a cup from videos of human demonstrations recorded from multiple viewpoints.[30]
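
As a toy illustration of this learning-from-experience idea (and not of any specific Google Brain system), the sketch below shows a tabular, bandit-style learner that discovers, purely from trial-and-error reward, which discrete throw velocity lands an object in each requested bin; the "physics" is a made-up stand-in.

```python
# Toy trial-and-error learner: pick a throw velocity that lands an object in the target bin.
import random

velocities = [2.0, 4.0, 6.0, 8.0]                      # candidate throw speeds (assumed)
bins = {0: (1.0, 3.0), 1: (3.0, 5.0), 2: (5.0, 7.0)}   # bin id -> acceptable landing range
q = {(b, v): 0.0 for b in bins for v in velocities}    # value table over (target bin, velocity)

def landing_distance(v):
    return v + random.uniform(-0.5, 0.5)               # noisy stand-in for projectile physics

alpha, epsilon = 0.2, 0.2
for episode in range(2000):
    target = random.choice(list(bins))
    if random.random() < epsilon:                      # explore a random velocity
        v = random.choice(velocities)
    else:                                              # exploit the best known velocity
        v = max(velocities, key=lambda a: q[(target, a)])
    lo, hi = bins[target]
    reward = 1.0 if lo <= landing_distance(v) <= hi else 0.0
    q[(target, v)] += alpha * (reward - q[(target, v)])  # one-step value update

for target in bins:
    best = max(velocities, key=lambda a: q[(target, a)])
    print(f"bin {target}: learned velocity {best}")
```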

Google Brain researchers have collaborated with other companies and academic institutions on robotics research. In 2016, the Google Brain team collaborated with researchers at X on research into learning hand-eye coordination for robotic grasping.[32] Their method allowed real-time robot control for grasping novel objects with self-correction.[32] In 2020, researchers from Google Brain, Intel AI Lab, and UC Berkeley created an AI model for robots to learn surgery-related tasks such as suturing from training with surgery videos.[31]

Interactive Speaker Recognition with Reinforcement Learning

In 2020, the Google Brain team and the University of Lille presented a model for automatic speaker recognition which they called Interactive Speaker Recognition (ISR). The ISR module recognizes a speaker from a given list of speakers by requesting only a few user-specific words.[33] The model can be adapted to choose speech segments in the context of text-to-speech training.[33] It can also prevent malicious voice generators from accessing the data.[33]

TensorFlow

TensorFlow is an open-source software library developed by Google Brain that allows anyone to use machine learning by providing the tools to train their own neural networks.[2] The library has been used, for example, to develop deep learning models that help farmers reduce the manual labor required to sort their yield, trained on a data set of human-sorted images.[2]
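
A minimal, hypothetical sketch of that kind of workflow is shown below: a small Keras image classifier trained on a hand-labeled data set (replaced here by random stand-in arrays). The shapes, class count and architecture are assumptions for illustration only.

```python
# Hypothetical TensorFlow/Keras sketch: train a small classifier on human-sorted images.
import numpy as np
import tensorflow as tf

NUM_CLASSES = 3  # e.g. three quality grades of produce (assumed)

# Stand-in for a folder of human-sorted 64x64 RGB photos and their labels.
images = np.random.rand(200, 64, 64, 3).astype("float32")
labels = np.random.randint(0, NUM_CLASSES, size=(200,))

model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(images, labels, epochs=3, batch_size=32)

# New photos can then be sorted automatically by predicted class.
print(model.predict(images[:5]).argmax(axis=1))
```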

Magenta

Magenta is a Google Brain project that creates new content in the form of art and music, rather than classifying and sorting existing data.[2] TensorFlow was updated with a suite of tools for users to guide a neural network in creating images and music.[2] However, a team from Valdosta State University found that the AI struggles to perfectly replicate human intention in artistry, similar to the issues faced in translation.[2]

Medical applications

The image sorting capabilities of Google Brain have been used to help detect certain medical conditions by seeking out patterns that human doctors may not notice to provide an earlier diagnosis.[2] During screening for breast cancer, this method was found to have one quarter the false positive rate of human pathologists, who require more time to look over each photo and cannot spend their entire focus on this one task.[2] Due to the neural network's very specific training for a single task, it cannot identify other afflictions present in a photo that a human could easily spot.[2]

Text-to-image model

Google Brain announced in 2022 that it had created two text-to-image models, Imagen and Parti, which compete with OpenAI's DALL-E.[34][35]

Later in 2022, the project was extended to text-to-video.[36]

Other Google products

Google Brain technology has been used in various other Google products, such as the Android operating system's speech recognition system, photo search for Google Photos, Smart Reply in Gmail, and video recommendations on YouTube.[37][38][39]

Reception

Google Brain has received coverage in Wired,[40][41][42] NPR,[6] and Big Think.[43] These articles have contained interviews with key team members Ray Kurzweil and Andrew Ng, and focus on explanations of the project's goals and applications.[40][6][43]

Controversies

In December 2020, AI ethicist Timnit Gebru left Google.[44] While the exact nature of her quitting or being fired is disputed, the cause of the departure was her refusal to retract a paper entitled "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?"[44] The paper explored potential risks of the growth of AI such as Google Brain, including environmental impact, biases in training data, and the ability to deceive the public.[44][45] The request to retract the paper was made by Megan Kacholia, vice president of Google Brain.[46] As of April 2021, nearly 7,000 current or former Google employees and industry supporters had signed an open letter accusing Google of "research censorship" and condemning Gebru's treatment at the company.[47]

In February 2021, Google fired one of the leaders of the company's AI ethics team, Margaret Mitchell.[46] The company's statement alleged that Mitchell had broken company policy by using automated tools to find support for Gebru.[46] In the same month, engineers outside the ethics team began to quit, citing the termination of Gebru as their reason for leaving.[48] In April 2021, Google Brain co-founder Samy Bengio announced his resignation from the company.[12] Despite being Gebru's manager, Bengio was not notified before her termination, and he posted online in support of both her and Mitchell.[12] While Bengio's announcement focused on personal growth as his reason for leaving, anonymous sources indicated to Reuters that the turmoil within the AI ethics team played a role in his considerations.[12]

In March 2022, Google fired AI researcher Satrajit Chatterjee after he disputed the findings of a paper published in Nature by Google AI team members Anna Goldie and Azalia Mirhoseini about the ability of computers to design computer chip components.[49][50]

See also

Artificial intelligence art
Glossary of artificial intelligence
List of artificial intelligence projects
Noosphere
Quantum Artificial Intelligence Lab (run by Google in collaboration with NASA and the Universities Space Research Association)

References

  1. ^ "What is Google Brain?". GeeksforGeeks. 2020-02-06. Retrieved 2021-04-09.
  2. ^ a b c d e f g h i j k l m Helms, Mallory; Ault, Shaun V.; Mao, Guifen; Wang, Jin (2018-03-09). "An Overview of Google Brain and Its Applications". Proceedings of the 2018 International Conference on Big Data and Education. ICBDE '18. Honolulu, HI, USA: Association for Computing Machinery. pp. 72–75. doi:10.1145/3206157.3206175. ISBN 978-1-4503-6358-7. S2CID 44107806.
  3. ^ a b Markoff, John (June 25, 2012). "How Many Computers to Identify a Cat? 16,000". The New York Times. Retrieved February 11, 2014.
  4. ^ Jeffrey Dean; et al. (December 2012). "Large Scale Distributed Deep Networks" (PDF). Retrieved 25 October 2015.
  5. ^ Conor Dougherty (16 February 2015). "Astro Teller, Google's 'Captain of Moonshots,' on Making Profits at Google X". Retrieved 25 October 2015.
  6. ^ a b c "A Massive Google Network Learns To Identify — Cats". National Public Radio. June 26, 2012. Retrieved February 11, 2014.
  7. ^ "U of T neural networks start-up acquired by Google" (Press release). Toronto, ON. 12 March 2013. Retrieved 13 March 2013.
  8. ^ Roth, Emma; Peters, Jay (April 20, 2023). "Google's big AI push will combine Brain and DeepMind into one team". The Verge. Archived from the original on April 20, 2023. Retrieved April 21, 2023.
  9. ^ a b "Brain Team – Google Research". Google Research. Retrieved 2021-04-08.
  10. ^ Etherington, Darrell (Aug 14, 2017). "Swift creator Chris Lattner joins Google Brain after Tesla Autopilot stint". TechCrunch. Retrieved 11 October 2017.
  11. ^ "Former Google and Tesla Engineer Chris Lattner to Lead SiFive Platform Engineering Team". www.businesswire.com. 2020-01-27. Retrieved 2021-04-09.
  12. ^ a b c d Dastin, Jeffrey; Dave, Paresh (2021-04-07). "Google AI scientist Bengio resigns after colleagues' firings: email". Reuters. Retrieved 2021-04-08.
  13. ^ "Build for Everyone – Google Careers". careers.google.com. Retrieved 2021-04-08.
  14. ^ a b c Zhu, Y.; Vargas, D. V.; Sakurai, K. (November 2018). "Neural Cryptography Based on the Topology Evolving Neural Networks". 2018 Sixth International Symposium on Computing and Networking Workshops (CANDARW). pp. 472–478. doi:10.1109/CANDARW.2018.00091. ISBN 978-1-5386-9184-7. S2CID 57192497.
  15. ^ a b Abadi, Martín; Andersen, David G. (2016). "Learning to Protect Communications with Adversarial Neural Cryptography". arXiv:1610.06918. Bibcode:2016arXiv161006918A.
  16. ^ Dahl, Ryan; Norouzi, Mohammad; Shlens, Jonathon (2017). "Pixel Recursive Super Resolution". arXiv:1702.00783. Bibcode:2017arXiv170200783D.
  17. ^ a b c d e "Google Brain super-resolution image tech makes "zoom, enhance!" real". arstechnica.co.uk. 2017-02-07. Retrieved 2017-05-15.
  18. ^ Bulat, Adrian; Yang, Jing; Tzimiropoulos, Georgios (2018), "To Learn Image Super-Resolution, Use a GAN to Learn How to do Image Degradation First", Computer Vision – ECCV 2018, Lecture Notes in Computer Science, Cham: Springer International Publishing, vol. 11210, pp. 187–202, arXiv:1807.11458, doi:10.1007/978-3-030-01231-1_12, ISBN 978-3-030-01230-4, S2CID 51882734, retrieved 2021-04-09
  19. ^ Oord, Aaron Van; Kalchbrenner, Nal; Kavukcuoglu, Koray (2016-06-11). "Pixel Recurrent Neural Networks". International Conference on Machine Learning. PMLR: 1747–1756. arXiv:1601.06759.
  20. ^ "Google uses AI to sharpen low-res images". engadget.com. Retrieved 2017-05-15.
  21. ^ "Google just made 'zoom and enhance' a reality – kinda". cnet.com. Retrieved 2017-05-15.
  22. ^ a b c d Castelvecchi, Davide (2016). "Deep learning boosts Google Translate tool". Nature News. doi:10.1038/nature.2016.20696. S2CID 64308242.
  23. ^ a b Lewis-Kraus, Gideon (2016-12-14). "The Great A.I. Awakening". The New York Times. ISSN 0362-4331. Retrieved 2021-04-08.
  24. ^ Johnson, Melvin; Schuster, Mike; Le, Quoc V.; Krikun, Maxim; Wu, Yonghui; Chen, Zhifeng; Thorat, Nikhil; Viégas, Fernanda; Wattenberg, Martin; Corrado, Greg; Hughes, Macduff (2017-10-01). "Google's Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation". Transactions of the Association for Computational Linguistics. 5: 339–351. doi:10.1162/tacl_a_00065. ISSN 2307-387X.
  25. ^ Reynolds, Matt. "Google uses neural networks to translate without transcribing". New Scientist. Retrieved 15 May 2017.
  26. ^ Metz, Cade; Dawson, Brian; Felling, Meg (2019-03-26). "Inside Google's Rebooted Robotics Program". The New York Times. ISSN 0362-4331. Retrieved 2021-04-08.
  27. ^ a b "Google Cloud Robotics Platform coming to developers in 2019". The Robot Report. 2018-10-24. Retrieved 2021-04-08.
  28. ^ a b Zeng, A.; Song, S.; Lee, J.; Rodriguez, A.; Funkhouser, T. (August 2020). "TossingBot: Learning to Throw Arbitrary Objects With Residual Physics". IEEE Transactions on Robotics. 36 (4): 1307–1319. doi:10.1109/TRO.2020.2988642. ISSN 1941-0468.
  29. ^ Gu, S.; Holly, E.; Lillicrap, T.; Levine, S. (May 2017). "Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates". 2017 IEEE International Conference on Robotics and Automation (ICRA). pp. 3389–3396. arXiv:1610.00633. doi:10.1109/ICRA.2017.7989385. ISBN 978-1-5090-4633-1. S2CID 18389147.
  30. ^ a b Sermanet, P.; Lynch, C.; Chebotar, Y.; Hsu, J.; Jang, E.; Schaal, S.; Levine, S.; Brain, G. (May 2018). "Time-Contrastive Networks: Self-Supervised Learning from Video". 2018 IEEE International Conference on Robotics and Automation (ICRA). pp. 1134–1141. arXiv:1704.06888. doi:10.1109/ICRA.2018.8462891. ISBN 978-1-5386-3081-5. S2CID 3997350.
  31. ^ a b Tanwani, A. K.; Sermanet, P.; Yan, A.; Anand, R.; Phielipp, M.; Goldberg, K. (May 2020). "Motion2Vec: Semi-Supervised Representation Learning from Surgical Videos". 2020 IEEE International Conference on Robotics and Automation (ICRA). pp. 2174–2181. arXiv:2006.00545. doi:10.1109/ICRA40945.2020.9197324. ISBN 978-1-7281-7395-5. S2CID 219176734.
  32. ^ a b Levine, Sergey; Pastor, Peter; Krizhevsky, Alex; Ibarz, Julian; Quillen, Deirdre (2018-04-01). "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection". The International Journal of Robotics Research. 37 (4–5): 421–436. doi:10.1177/0278364917710318. ISSN 0278-3649.
  33. ^ a b c Seurin, Mathieu; Strub, Florian; Preux, Philippe; Pietquin, Olivier (2020-10-25). "A Machine of Few Words: Interactive Speaker Recognition with Reinforcement Learning". Interspeech 2020. ISCA: ISCA: 4323–4327. arXiv:2008.03127. doi:10.21437/interspeech.2020-2892. S2CID 221083446.
  34. ^ Vincent, James (May 24, 2022). "All these images were generated by Google's latest text-to-image AI". The Verge. Vox Media. Retrieved May 28, 2022.
  35. ^ Khan, Imad. "Google's Parti Generator Relies on 20 Billion Inputs to Create Photorealistic Images". CNET. Retrieved 23 June 2022.
  36. ^ Edwards, Benj (2022-10-05). "Google's newest AI generator creates HD video from text prompts". Ars Technica. Retrieved 2022-12-28.
  37. ^ "How Google Retooled Android With Help From Your Brain". Wired. ISSN 1059-1028. Retrieved 2021-04-08.
  38. ^ "Google Open-Sources The Machine Learning Tech Behind Google Photos Search, Smart Reply And More". TechCrunch. 9 November 2015. Retrieved 2021-04-08.
  39. ^ "This Is Google's Plan to Save YouTube". Time. May 18, 2015.
  40. ^ a b Levy, Steven (April 25, 2013). "How Ray Kurzweil Will Help Google Make the Ultimate AI Brain". Wired. Retrieved February 11, 2014.
  41. ^ Wohlsen, Marcus (January 27, 2014). "Google's Grand Plan to Make Your Brain Irrelevant". Wired. Retrieved February 11, 2014.
  42. ^ Hernandez, Daniela (May 7, 2013). "The Man Behind the Google Brain: Andrew Ng and the Quest for the New AI". Wired. Retrieved February 11, 2014.
  43. ^ a b "Ray Kurzweil and the Brains Behind the Google Brain". Big Think. December 8, 2013. Retrieved February 11, 2014.
  44. ^ a b c "We read the paper that forced Timnit Gebru out of Google. Here's what it says". MIT Technology Review. Retrieved 2021-04-08.
  45. ^ Bender, Emily M.; Gebru, Timnit; McMillan-Major, Angelina; Shmitchell, Shmargaret (2021-03-03). "On the Dangers of Stochastic Parrots". Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. Virtual Event Canada: ACM. pp. 610–623. doi:10.1145/3442188.3445922. ISBN 978-1-4503-8309-7.
  46. ^ a b c Schiffer, Zoe (2021-02-19). "Google fires second AI ethics researcher following internal investigation". The Verge. Retrieved 2021-04-08.
  47. ^ Google Walkout For Real Change (2020-12-15). "Standing with Dr. Timnit Gebru — #ISupportTimnit #BelieveBlackWomen". Medium. Retrieved 2021-04-08.
  48. ^ Dastin, Jeffrey; Dave, Paresh (2021-02-04). "Two Google engineers resign over firing of AI ethics researcher Timnit Gebru". Reuters. Retrieved 2021-04-08.
  49. ^ Wakabayashi, Daisuke; Metz, Cade (2022-05-02). "Another Firing Among Google's A.I. Brain Trust, and More Discord". The New York Times. ISSN 0362-4331. Retrieved 2022-06-12.
  50. ^ Simonite, Tom. "Tension Inside Google Over a Fired AI Researcher's Conduct". Wired. ISSN 1059-1028. Retrieved 2022-06-12.
