This year, generative AI ate the cake (and had it too). Creativity, once thought to be a strictly human bastion, has been taken over by AI. OpenAI's DALL·E 2 was pivotal for text-to-image generators, followed by Stability AI's Stable Diffusion. Then, at the end of November, OpenAI struck again with ChatGPT, a generative AI chatbot that can write haikus and code with equal ease. To say that generative AI is the buzzword of the year would be an understatement.
However, amidst all this din, Google’s AI subsidiary, DeepMind, has stayed suspiciously silent on the creative front.
This isn’t to say that DeepMind did nothing to be in the news this year. In October, the Demis Hassabis-founded company unveiled AlphaTensor, the first AI system to discover novel, efficient algorithms for fundamental tasks like matrix multiplication. In July, DeepMind predicted the structure of nearly every known protein, cracking one of the biggest challenges in biology in a span of roughly 18 months using an AI model called AlphaFold. The model has already informed work on malaria vaccines, enzymes that break down plastic waste and new antibiotics, and could accelerate drug discovery more broadly.
A couple of weeks ago, the company released AlphaCode, a new coding AI to rival OpenAI’s Codex. AlphaCode performed as well as roughly half of the human participants in simulated Codeforces programming competitions. All of this was done even as DeepMind stuck to its roots of mastering strategy games. In December, DeepNash, an AI agent built by the company, learned to play the classic board game Stratego as well as a human expert. (Stratego is more complex than chess and Go, and requires more craft than poker.)
DeepMind’s absence from the creative sphere could indicate that the company does not subscribe to the hype around these generative models. At Web Summit in Lisbon in November, deep learning sceptic Gary Marcus and linguist Noam Chomsky discussed how generative AI may be all ‘fun’ but is unlikely to lead to AGI.
DeepMind, if anything, is steadfast about sticking to its mission of working towards AGI. In an exclusive interview with Analytics India Magazine, Pushmeet Kohli, the company’s head of research for AI for Science and Reliability, mentioned DeepMind’s ‘mission statement’ several times.
Kohli clarified that DeepMind wasn’t averse to generative AI. “We have done a lot of work in this area. And in fact, we have actually published a number of research papers, and we are sort of pushing the boundaries of what is possible. So, even with models like Flamingo, we have made some very important scientific contributions that can’t be ignored,” he said.
Kohli wasn’t wrong. DeepMind has built generative models like Gato and Flamingo, which were definitive steps towards AGI. Gato, launched in May, is a ‘general-purpose’ system that can be taught to perform 604 tasks, including captioning images, playing Atari games, stacking blocks with a robot arm and engaging in conversation.
Flamingo, released in April, is an 80-billion-parameter visual language model built on Chinchilla, DeepMind’s 70-billion-parameter LLM. Because it is a few-shot learner, Flamingo can pick up new vision tasks with little to no additional training data: it can chat with users, answer questions about input images and videos, and outperformed all other few-shot learning models on a set of 16 vision-language benchmarks.
However, DeepMind was wary of releasing models that might make noise in the community for having fewer safeguards. Case in point: Meta’s Galactica, which was pulled down three days after its launch, or the ongoing war between human artists and the AI image generators trained on their work.
Kohli explained this safety-first principle, saying, “Deploying generative models comes with its own set of challenges and safety concerns. There are major worries around how these models and the output of the models will be used. DeepMind has always taken safety at its core set of requirements. And so we do a lot of research around first thinking about the implications of facilitating sharing something or deploying it in the wider world.”
“I would say that we are probably one of the leading groups in this area, but in terms of sharing and deploying these models, we have been more thoughtful. We are doing a lot of work on safety and security and for the responsible deployment of these techniques,” Kohli explained.
This caution is evident in the one creative tool that DeepMind has released. In the first week of December, the company unveiled Dramatron, an AI ‘co-writing’ tool for scripts. The tool generates character descriptions, plot points, location descriptions and dialogue, producing a full draft script that human writers can then compile, edit and rewrite. From DeepMind’s perspective, scriptwriting does seem like the safest stride in the creative playground of generative AI, one least likely to draw accusations of plagiarism or inaccuracy.