Google Pathways, Machine Translation, and Other Language Industry News
Introducing Google Pathways
Last October, Google Research’s senior vice president Jeff Dean introduced Pathways to the rest of the world. An AI architecture that can piece together previous knowledge to solve new tasks, Pathways is a work in progress whose purpose is to break artificial intelligence out of its cumbersome, inefficient shell. To some, it’s another new experiment Google AI is cooking up. But for others, it’s the next chapter in the story of artificial intelligence nearing the intuitive cerebral processes of the human brain.
So what exactly does Pathways do? It’s a new AI architecture designed to address weaknesses in existing systems and synthesize their strengths. Current AI systems have a number of issues: they’re designed with specific purposes in mind, meaning they’re of little use outside their original context. They’re also reliant on a single sense or mode of input—quite unlike humans, who employ five (or six?) senses to make sense of this world. Finally, they’re dense and inefficient, requiring far too much data and energy for even the smallest task. Here’s how Pathways plans to solve these issues, one by one.
Problem 1: Current AI models are trained to do only one thing.
AI systems today are often built from scratch, each designed with one problem in mind. Dean compares this to our childhood experience of learning to jump rope: “Imagine if, every time you learned a new skill (jumping rope, for example), you forgot everything you’d learned – how to balance, how to leap, how to coordinate the movement of your hands – and started learning each new skill from nothing.” That is how developers and scientists train most machine learning models.
Instead of improving current models so that they become more robust and can take on new tasks, scientists build new models from scratch; it’s a tedious and time-consuming process. The result is thousands of models for thousands of tasks. With this method, it takes longer for models to learn each task, since each one must be taught about the world from nothing. This is completely different from how humans approach new tasks: humans apply their previous knowledge, identifying the parts of a task they already know so they can carry out the new task as efficiently as possible (if they want to, that is).
Dean and his colleagues at Google propose Pathways as a remedy for this problem in AI; Pathways is a model that can “not only handle many separate tasks, but also draw upon and combine its existing skills to learn new tasks faster and more effectively.” For example, per Dean, a model that learns how to analyze aerial images to predict landscape elevations would be able to apply that knowledge to predicting how flood waters will flow through that landscape. “A bit closer to the way the mammalian brain generalizes across tasks,” explains Dean.
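Dean’s aerial-imagery example can be sketched in a few lines of Python. Everything below is a hypothetical toy illustration, not Pathways code: an expensive shared “backbone” computes generic features once, and each new task adds only a small head that reuses them instead of relearning the world from scratch.

```python
# Toy sketch of skill reuse (all names and numbers are hypothetical).

def shared_backbone(image):
    """Stand-in for expensive, pretrained feature extraction."""
    # Summarize pixel values into a few generic features.
    return [sum(image) / len(image), max(image), min(image)]

def elevation_head(features):
    """Existing skill: predict elevation from shared features (toy linear head)."""
    weights = [2.0, 0.5, -0.5]
    return sum(f * w for f, w in zip(features, weights))

def flood_head(features):
    """New skill: reuses the SAME backbone features, no retraining of the backbone."""
    weights = [-1.0, 0.2, 1.5]
    return sum(f * w for f, w in zip(features, weights))

aerial = [0.1, 0.4, 0.9, 0.2]
feats = shared_backbone(aerial)   # computed once, shared across tasks
elev = elevation_head(feats)      # old task
flood = flood_head(feats)         # new task, built on existing knowledge
```

The design point is that adding the flood task costs only a tiny new head, not a whole new model; this is the “draw upon and combine existing skills” idea in miniature.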
Problem 2: Current AI models focus on one sense.
Another problem with existing AI systems is that they are oblivious to context and connected ideas. “Most of today’s models process just one modality of information at a time,” says Dean. “They can take in text, or images or speech – but typically not all three at once.” This differs from how humans take in information: we use multiple senses at once to account for the multisensory nature of reality.
Scientists hope to solve this problem through Pathways, teaching it to “encompass vision, auditory, and language understanding simultaneously.” Dean offers this illustration of how that would work:
So whether the model is processing the word “leopard,” the sound of someone saying “leopard,” or a video of a leopard running, the same response is activated internally: the concept of a leopard. The result is a model that’s more insightful and less prone to mistakes and biases.
Unlike previous models, Pathways would also be able to handle more abstract forms of data, says Dean; this capacity for abstraction will allow scientists to model more complex systems.
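The “leopard” example describes a shared embedding space: inputs from different modalities map to nearby vectors when they refer to the same concept. The toy Python sketch below (hypothetical encoders and made-up vectors, not Google’s model) shows the idea by checking that a text “leopard” and a spoken “leopard” land close together, while different concepts stay far apart.

```python
import math

# Hypothetical shared concept space (vectors invented for illustration).
CONCEPT_VECTORS = {"leopard": [0.9, 0.1, 0.3], "jump_rope": [0.1, 0.8, 0.2]}

def text_encoder(word):
    # Toy: returns the concept vector with a small modality-specific offset.
    return [x + 0.01 for x in CONCEPT_VECTORS[word]]

def audio_encoder(word_spoken):
    # A different "sense" arrives at (nearly) the same internal representation.
    return [x - 0.01 for x in CONCEPT_VECTORS[word_spoken]]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Same concept through two modalities: nearly identical representations.
same = cosine(text_encoder("leopard"), audio_encoder("leopard"))
# Different concepts: clearly separated.
diff = cosine(text_encoder("leopard"), audio_encoder("jump_rope"))
```

In a real multimodal model the encoders are learned networks, but the payoff is the same: one internal “concept of a leopard” is activated no matter which sense delivered the input.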
Problem 3: Current AI models are dense and inefficient.
Lastly, Dean points out that existing AI systems are “dense,” which is to say that, to accomplish a given task, a model usually activates the entire neural network, even if the task at hand is simple. This is markedly different from how humans deal with tasks; humans only utilize relevant pieces of information and activate corresponding parts of the brain to solve the situation. “There are close to a hundred billion neurons in your brain,” says Dean, “but you rely on a small fraction of them to interpret this sentence.”
With Pathways, new AI models will be “sparsely” activated: only the small parts of the network relevant to a given task will be used. Through this process, AI models will dynamically learn which parts of the network are good at which tasks, allocating the parts that best fit each task’s needs. Such an architecture, Dean claims, is not only faster and more energy-efficient but also has a larger capacity to learn more kinds of tasks. As an example, Dean points to GShard and Switch Transformer—two of the largest machine learning models—which already use sparse activation. The two models consume less than a tenth of the energy required by similarly sized dense models while sacrificing none of the accuracy.
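The sparse activation used by GShard and Switch Transformer is often implemented as “top-k expert routing”: a small gate scores every expert sub-network, but only the best-scoring few actually run. The Python sketch below is a hypothetical toy (the gate and experts here are made-up arithmetic, not learned networks) showing how most of the model can stay idle for any one input.

```python
# Toy sketch of sparse top-k routing (all functions hypothetical).

def gate_scores(x, num_experts=4):
    """Toy gate: deterministic scores derived from the input.
    In a real model this would be a small learned network."""
    return [((x * (i + 1)) % 7) / 7 for i in range(num_experts)]

def expert(i, x):
    """Stand-in for an expensive expert sub-network."""
    return x * (i + 1)

def sparse_forward(x, k=1, num_experts=4):
    scores = gate_scores(x, num_experts)
    # Pick the k highest-scoring experts...
    top = sorted(range(num_experts), key=lambda i: scores[i], reverse=True)[:k]
    # ...and run ONLY those; the rest of the network stays idle.
    return sum(scores[i] * expert(i, x) for i in top), top

output, active = sparse_forward(x=3, k=1)
# For this input, 1 of 4 experts is activated; compute scales with k, not
# with total model size, which is where the energy savings come from.
```

With k fixed, adding more experts grows the model’s capacity without growing the per-input compute, which is the efficiency argument Dean makes above.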
The main purpose of Pathways, Dean concludes, is to advance humans “from the era of single-purpose models that merely recognize patterns to one in which more general-purpose intelligent systems reflect a deeper understanding of our world and can adapt to new needs.” This last purpose—reflecting a deeper understanding of our world and adapting to new needs—is particularly important, says Dean, as it will help us address pressing global challenges, mainly ecological ones. New paradigms in AI modeling will cut costs and energy use, helping to build a more sustainable future for AI research.
So how does any of this relate to translation, or to the related fields of machine translation and natural language processing? As explained in our other article on Lokalise’s switch to carbon neutrality, training and using NLP models and machine translation systems consume a surprisingly large amount of energy. With more efficient NLP and MT systems in place, we can worry less about the environmental ramifications of our translation work. Efficiency is emerging as a standard in its own right, alongside accuracy, and this trend toward greener practices is all the more relevant to the AI-intensive field of translation.
Pathways resembles massively multilingual machine translation in that both favor general-purpose, multimodal systems over single-use systems designed for one purpose only. Developments in AI now give us integrative models that are customizable and applicable in various contexts; other factors, such as metadata, extend that customization even further. Pathways is the next step toward faster, more efficient, and more effective AI models. Sustainability and development, hand in hand, no longer enemies.
Slator Language Industry Job Index Indicates More Jobs for Translators
Slator’s very own Language Industry Job Index (LIJI) saw an upward trend in March 2022, indicating more hiring activity in the language industry. Designed to “track employment and hiring trends in the global language industry,” the LIJI is useful for gauging the current state of the language industry around the world. The Slator LIJI rose by more than three points to a 2022 high of 176.6, though this remains slightly below the December 2021 peak of 180.5.
Slator also recently published its analysis of the language industry market, reporting that the combined US-dollar revenue of major LSPs rose by more than 22% in 2021. Among these companies is Japan’s Honyaku Center, which reported that “overall sales for the first three quarters increased 6.1% year-on-year to JPY 7.53 bn (USD 65m)” between April 1 and December 31, 2021. In Australia, Ai-Media—a captioning service provider—saw revenues rise 29%; UK-based RWS also saw revenues rise, crossing the billion-dollar mark for the first time. These are just three of the many LSPs that have seen a general rise in revenues—and, with it, in hiring. Slator also tracked LinkedIn data and found more than “600,000 profiles under the Translation and Localization category.”
Things are looking up for the language industry overall; artificial intelligence and machine translation have not driven human translators out, as people once feared. The relationship between artificial intelligence and human translation is a strained, often counterintuitive one that can’t easily be reduced to good or bad, right or wrong. Rather than competition between machines and humans, the new paradigm is one of cooperation: human translators harnessing artificial intelligence to assist in their work (as with computer-aided translation tools, among others). Prospects for translators and linguists, then, remain safe from the reaches of artificial intelligence. There’s even data to back that up.