AI is approaching a tipping point for game developers

  • Text-to-image generators already used by art teams
  • Hardware capacity could be a limiting factor

Artificial intelligence (AI) may not be at the stage of world domination, but its reach has nevertheless expanded considerably over the past year. Programs trained on hundreds of millions of online images to turn text prompts into new pictures are now widely available. One industry where it is already having an impact is gaming: video game developers are already using the tools to inspire creative artwork, and they expect AI to deliver further productivity gains as the technology develops.

In April, OpenAI released the DALL-E 2 image generator. It was a big step up from DALL-E, released in 2021, as it can combine concepts and styles in a single image. For example, rather than just generating an image of a monkey, it can now generate one of a monkey on skis in the style of Van Gogh. As frivolous as that may seem, the impact was wider. DALL-E 2 was trained on hundreds of millions of images from the internet using an image-generation process called “diffusion”, in which the model starts from random noise and removes it step by step until a picture emerges. It may not sound revolutionary, but it was a revelation when it was released – and the uncanny nature of the images also raised questions about what can be considered art.
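As a rough illustration of the idea only – not OpenAI’s actual method – the Python sketch below shows the core loop of a diffusion-style generator: start from pure noise and repeatedly “denoise” towards a target. In a real model the denoiser is a neural network trained on those millions of images; here it is a hard-coded stand-in.

```python
import numpy as np

# Toy illustration of diffusion-style generation: begin with pure noise and
# repeatedly nudge it towards a denoised guess. A real model would predict
# the guess with a neural network trained on millions of images; here the
# "data" is just three hand-picked numbers standing in for pixel values.
rng = np.random.default_rng(0)
target = np.array([0.8, 0.2, 0.5])          # stand-in for the data distribution
x = rng.standard_normal(3)                   # start from pure Gaussian noise
steps = 50

for t in range(steps, 0, -1):
    denoised_guess = target                  # a trained network would predict this from x
    x = x + (denoised_guess - x) / t         # take a small step towards the guess
    x += rng.standard_normal(3) * (t / steps) * 0.05   # re-inject a little noise

print(np.round(x, 3))                        # ends up close to the target
```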

Two services with similar capabilities, Midjourney and Stable Diffusion, followed in July and August. For people in the industry, this year is seen as a watershed moment. “The work of DALL-E 2 and Midjourney is actually a game-changer; they have capabilities we’ve never seen before that appeared seemingly overnight,” said Danny Lange, senior vice-president of AI and machine learning at Unity Software (US: U), a $9bn (£7.8bn) market cap game developer.

In an industry where production costs are extremely high, artists are already using these programs to speed up their work and help with creative inspiration. Because the programs can only output 2D images, they are currently most useful for flat backgrounds. “All of our artists use programs such as Midjourney, especially for mood art,” said TinyBuild (TBLD) chief executive Alex Nichiporchik. “But they’re also useful for sparking new creative ideas.”
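To give a sense of how little effort such tools take to call, here is a minimal sketch of requesting a piece of mood art from a text prompt using OpenAI’s image endpoint via its 2022-era Python client; the prompt, the placeholder key and the package version are illustrative assumptions, not anything the artists quoted here actually use.

```python
import openai  # pip install openai (the 2022-era 0.x client; later versions changed the interface)

openai.api_key = "sk-your-key-here"  # placeholder, not a real key

# Ask DALL-E 2 for a single piece of mood art from a text prompt.
response = openai.Image.create(
    prompt="moody concept art of a rain-soaked neon city street, wide shot",
    n=1,
    size="1024x1024",
)

print(response["data"][0]["url"])  # temporary URL of the generated image
```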

Gaming could be a big beneficiary of AI because development costs are so high. Blockbuster titles, known as AAA games, can cost up to $80 million to create, and if a big game fails, a developer can swing straight from profit to loss. Last year, Frontier Developments (FDEV) saw its operating profit fall from £19.9m to £1.5m after the failure of its Elite Dangerous: Odyssey game, and it ended up having to take a £7.4m writedown. Lowering production costs would lessen the potential downside if a game were to fail like this.

At the same time, artists fear that they will soon be replaced. For now, however, the tools are more of a help than a replacement. Writing interesting prompts is a creative skill, and Lange believes that in future “prompt writing” will become a job in itself. “The outcome of the program is only as good as the input of the artist,” he said.

Great steps to come

Video games are 3D and animated, so there is a limit to how much the existing Midjourney and DALL-E 2 image generators can speed up the development process; most development time and money is spent improving animations. Nichiporchik believes the major breakthroughs will come when AI can create 3D images and animate characters. “When AI can be used to create a character that moves organically, it will take away a lot of procedural work.”

It’s unclear when this breakthrough will happen, but once a discovery is made, other companies can build on it – code and research are often published online for exactly this reason. OpenAI built DALL-E 2 on the diffusion image-synthesis approach first described in research published in 2015. There are already published papers describing methods for generating 3D images; however, the process is currently limited to low-resolution images because of the computing power required.

But this cross-industry development is the way forward: innovations made for driverless vehicles could be reapplied in the gaming industry. Software that helps cars navigate a real environment is also useful for helping game characters move through a digital world. “It will only take a leap from the big boys like Google or Tesla; they will then publish that research and it will be used in the gaming industry,” Nichiporchik said.

In this scenario, a game’s characters and environment could be generated in real time as people play. TinyBuild has already published a game called Streets of Rogue, which uses AI to generate levels for players. The game was created by a single developer, Matt Dabrowski.
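Streets of Rogue’s own generator isn’t public, but the sketch below gives the general flavour of procedural level generation: a simple “drunkard’s walk” in Python that carves floor tiles out of a solid grid of walls, producing a different layout every run. The function name and parameters are illustrative, not taken from the game.

```python
import random

def generate_level(width=20, height=10, floor_fraction=0.4, seed=None):
    """Carve a random cave-like level by wandering from the centre,
    turning wall tiles ('#') into floor tiles ('.') as we go."""
    rng = random.Random(seed)
    grid = [["#"] * width for _ in range(height)]     # start as solid walls
    x, y = width // 2, height // 2
    carved, target = 0, int(width * height * floor_fraction)
    while carved < target:
        if grid[y][x] == "#":
            grid[y][x] = "."
            carved += 1
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x = min(max(x + dx, 1), width - 2)            # keep a wall border
        y = min(max(y + dy, 1), height - 2)
    return grid

for row in generate_level(seed=42):                   # a fresh layout per seed
    print("".join(row))
```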

Starry Star Shootout: An AI-generated image made with the “Call of Duty Multiplayer Van Gogh” prompt

Streets of Rogue is a top-down 2D game, so the processing power needed to run it is relatively low. Today’s computing power would run out of steam, however, if games were 3D and fully AI-generated as the player walked through them. DALL-E 2 and Midjourney have to run on cloud computing networks because local processing is not powerful enough, and this wouldn’t work for real-time games: at 60 frames per second a game has roughly 16 milliseconds to produce each frame, so the time it takes to send data to and from the cloud would create lag.

More power

More localized computing power is needed, yet the ability to shrink microprocessors is currently reaching a physical limit: Moore’s Law, which predicted that the number of transistors that could be fitted on a microprocessor would double every 24 months, is nearing its end almost 60 years after it was first identified. Major chip designers such as Qualcomm (US: QCOM) and Nvidia (US: NVDA) see this as a chance to differentiate themselves from the competition. “All hardware makers are aware of the need to optimize for AI, and mobile devices are now reserving space on their chips for real-time machine learning,” Lange said.
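As a back-of-the-envelope illustration of what that doubling rule implies, the snippet below compounds a transistor count every 24 months; the 1971 starting figure of roughly 2,300 transistors is an assumption for scale, not chip-industry data.

```python
# Illustrative only: compound a transistor count under Moore's Law's
# doubling-every-24-months rule, starting from an assumed ~2,300
# transistors in 1971.
transistors = 2_300
for year in range(1971, 2023, 2):
    print(f"{year}: ~{transistors:,} transistors")
    transistors *= 2
```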

The need for localized computing is also a future problem for Microsoft’s (US: MSFT) Xbox Game Pass. The pass allows customers to access games through the cloud – instead of downloading an entire game or playing it from a disc on a console – and can even be played on newer Samsung TV models. The advantage is that only a controller is needed, so hardware costs are low for users. However, it may be limited to more basic games if software developments require ever greater computing power.

Lange thinks cloud computing will be problematic for high-end games, but also warned that software engineers shouldn’t be underestimated. “You could theoretically use AI to predict where the player will walk and generate their environment before they move, which would remove the latency issue between cloud and device,” he said.
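A crude sketch of the idea Lange describes: extrapolate the player’s recent movement, work out which map “chunk” they are heading into, and request it from the cloud before it is needed. The straight-line predictor below stands in for whatever learned model would really be used, and all names are hypothetical.

```python
from collections import deque

def predict_next_chunk(positions, chunk_size=16, lookahead=2.0):
    """Extrapolate the last two player positions and return the grid
    coordinate of the chunk the player is likely to enter next."""
    (x0, y0), (x1, y1) = positions[-2], positions[-1]
    vx, vy = x1 - x0, y1 - y0                        # per-tick velocity
    px, py = x1 + vx * lookahead, y1 + vy * lookahead
    return int(px // chunk_size), int(py // chunk_size)

recent = deque(maxlen=2)
recent.append((100.0, 40.0))
recent.append((103.0, 41.5))                         # player moving right and slightly up
print(predict_next_chunk(recent))                    # chunk to pre-generate / prefetch
```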

Frontier Developments founder and chief executive David Braben is less effusive, saying recent advances in AI aren’t a watershed moment but just another tool for developers to use. “Our industry is one of the fastest growing and every year there is an innovation; AI just puts more capabilities in the hands of talented people,” he said.

Whether or not 2022 comes to be seen as a pivotal moment in the life of AI, it is clear that the technology will be useful in the future development of games. Improved AI will reduce costs, but it will do so across the whole industry, meaning any margin gains could be competed away on price.

The main returns will likely accrue further up the supply chain – to the hardware companies producing the equipment that can support AI innovations. Nvidia is well positioned as the market leader in graphics processing units and AI chips. Last month, it released a new chip that uses artificial intelligence to improve graphics. The graphics cards, known as “Ada Lovelace”, will be produced by Taiwan Semiconductor Manufacturing (US: TSM).

“Game developers are notorious for pushing hardware to the brink, and AI adds a new dimension for them to do so,” Lange said. It will be fun to see how far they can go.
