AI January 16, 2023
Microsoft Makes Big Moves in AI – Is the Firm out to Dominate Our AI Future?
Microsoft Corporation is making big moves in artificial intelligence (AI), part of a deliberate plan by the U.S. tech giant to keep its lead in emerging consumer tech while positioning itself as the dominant force in the future of AI.
The tech company is doubling down on the $1 billion it invested in OpenAI, creator of ChatGPT, three years ago with a further $10 billion. Microsoft has been building its future on artificial intelligence for the past six years, according to CEO Satya Nadella.
Back then, Nadella talked about the importance of “more intelligent apps and services” in an interview with The Verge. A few months later that year, Microsoft launched its chat-based “conversation as a platform” service.
The firm believed similar interfaces would take over as the primary way humans interact with the internet and find information. Now Microsoft plans to integrate ChatGPT into Bing, backed by its $10 billion investment in the disruptive bot’s parent company, OpenAI. The feature could be added by the end of March, The Information reported.
Also read: Could ChatGPT Be Used to Lobby on Behalf of Corporations?
Microsoft to influence OpenAI development
Microsoft will take a 49% stake in a revised OpenAI structure. It would also receive 75% of OpenAI’s profits until it recoups its full investment. The new funding round, which has drawn interest from some of the leading venture funds on Wall Street, values OpenAI at $29 billion.
This is a significant investment for OpenAI, founded in 2015 by Y Combinator president Sam Altman and others. It could help the company forge ahead with growth initiatives that include the further development of ChatGPT and other AI-related projects.
For Microsoft, it could mark an important step toward gaining a foothold in AI. By working directly with OpenAI, a pioneer in the industry, the tech firm would gain influence over the direction of tools that are already disrupting our everyday lives.
Artificial general intelligence refers to AI capable of the full range of human cognitive tasks, from communication to research and productivity. OpenAI’s selling point is making artificial intelligence widely accessible while preventing its capture by bad actors.
Microsoft forayed into the space with a $1 billion investment in OpenAI in 2019. Now it is looking to tighten its grip on the company on the back of ChatGPT’s massive popularity.
Co-founded by billionaire Bill Gates, the Washington-based corporation also plans to integrate OpenAI’s chatbot technology into other products, including Microsoft Word and Outlook. Responding to queries in conversational language, the AI-powered ChatGPT promises a whole new way to experience search.
Microsoft solidifies AI position as Google hesitates
Microsoft is solidifying its position at a time when Google has been reluctant to roll out AI-backed search, wary of the racial and gender biases still reflected in AI. Still sitting on peerless data troves, Google may be working behind the scenes on a more humanly nuanced foray into AI while Microsoft bets on first-mover advantage.
Microsoft is engaged in another turf war with Google Search through its involvement with You.com, a non-tracking AI search engine committed to both privacy and personalization. In addition to using Bing on the backend, the search engine uses OpenAI’s GPT-3 for YouWrite, its text generation tool.
A cursory look at similarly modeled engines shows they tend to be minimalist and lack customization options, whereas Google’s interoperating products give it a self-contained feel. You.com sits on the maximalist side, thanks to AI, and is designed for personalization while remaining committed to privacy.
Beyond search, ChatGPT has been touted as a civilizational disruptor for its ability to respond to intellectual prompts, generating essays, code, and even Kindle-worthy novels to fine contextual specifications.
However, OpenAI CEO Sam Altman warns that “it’s a mistake to be relying on [ChatGPT] for anything important right now.”
ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness.
it's a mistake to be relying on it for anything important right now. it’s a preview of progress; we have lots of work to do on robustness and truthfulness.
— Sam Altman (@sama) December 11, 2022
OpenAI losing its AI-for-humanity appeal
According to a Semafor report, OpenAI’s structure will “be revised to reflect Microsoft ownership at 49%, along with other investors jointly owning 49%, and OpenAI’s non profit owning 2% of the new entity.”
Elon Musk, who co-founded OpenAI with Altman in 2015, left the company in 2018, citing a potential conflict of interest with Tesla’s AI work. The company went from nonprofit to capped-profit, and may now be entering an era of private profit motive under Microsoft.
Despite promising wide accessibility to AI, the company’s ChatGPT already follows U.S. foreign policy in restricting access in America’s geopolitical rivals and countries under sanctions.
Even if this is not stated policy, it mirrors the procedure some banks follow when dealing with sanctioned countries, while bigger tech companies maintain quiet ties with the State Department.
It remains to be seen how OpenAI’s absorption by Microsoft will affect its original AI-for-humanity selling point. As MetaNews reported recently, Bill Gates has talked up artificial intelligence and augmented reality (AR) while casting aspersions on the metaverse.
AI
Could Sci-Fi Movies Like Terminator Have Shaped Our Fears of AI?
British Computer Society CEO Rashik Parmar believes that AI threats to humanity are overstated. He said the concerns being expressed “play to the fears that most of society has” and have been shaped by popular science fiction films like The Terminator and Ex Machina.
His comments come in the wake of a recent statement from the US-based Center for AI Safety warning of “the risk of extinction from AI.” Signed by the CEOs of OpenAI and Google DeepMind, the statement says the risks should be treated with the same urgency as pandemics and nuclear war.
“There should be a healthy scepticism about big tech and how it is using AI, which is why regulation is key to winning public trust,” said Parmar, a former IBM chief technology officer for Europe, Middle East and Africa, according to local media reports.
“But many of our ingrained fears and worries also come from movies, media and books, like the AI characterizations in Ex Machina, The Terminator, and even going back to Isaac Asimov’s ideas which inspired the film I, Robot.”
Also read: AI Code of Conduct Coming ‘Within Weeks’ Says US and Europe
Movies fuel AI fears
The development of AI has raised concerns about its potential to be used for harmful purposes, such as discrimination, surveillance, and nuclear war. There have also been concerns about the potential for artificial intelligence to create mass unemployment.
In March, several luminaries, including Apple co-founder Steve Wozniak, billionaire Elon Musk, AI researcher Gary Marcus, and others, signed an open letter calling for a six-month global moratorium on training large AI language models.
Then Geoffrey Hinton, considered the “godfather of AI,” quit his job at Google last month, warning that the technology could fuel disinformation and cause massive job losses.
For American computer scientist Eliezer Yudkowsky, the risks of AI cannot be managed through regulation alone. He believes that the development of AI poses an existential threat to humanity and that the only way to deal with the threat is to shut it all down completely.
Terminator
Parmar explained that people who are more familiar with AI through Hollywood movies are more likely to believe that it poses a threat to humanity. He said the concerns that are being expressed “play to the fears that most of society has”.
“They come from what they’ve seen in the movies. They’re amazing, you watch Terminator and you think that it’s real and that it’s going to come and kill you any second now,” said Parmar.
“It’s a killing machine, that throughout the films uses AI in different ways – interpreting what’s been done, predicting the future and responding to different situations. AI isn’t explicitly mentioned but you know it’s AI that’s doing this,” he added.
Responsible development
In science fiction films like Terminator, Ex Machina and The Matrix, AI is often portrayed as a threat to humanity. The films depict artificial intelligence systems that become self-aware and decide to exterminate their human creators.
Although the movies are works of fiction, they have helped to shape public perceptions of AI, according to Parmar. He noted AI is not as powerful as Hollywood would have you believe, and that the systems are not yet capable of independent thought or action.
“AI is just a bit of software and no bit of software has any intention, it’s not sentient,” Parmar stated, urging balance and responsibility in the development of artificial intelligence.
“There are legitimate concerns about AI, which is why we need to make sure it grows up responsibly,” he said.
“It needs to be developed by ethical professionals, who believe in a shared code of conduct.” The British Computer Society chief executive officer blamed the media for “feeding off these fears” to create misconceptions about the dangers of AI.
“Do films and the media have to change? No. It just proves we need more public education about the reality of AI, and for it to be part of the skills and teaching we get when we’re very young,” Parmar added.
AI regulation
Regulators from around the world have started to pay more attention to AI in recent months. This past week, European Commission Vice President Margrethe Vestager said the EU and the United States expect to draft a voluntary code of conduct on artificial intelligence within weeks.
⚠️Accountability on #AI can't wait. It is NOW. Today #TTC kicked off work on a 1st voluntary AI #CodeOfConduct. We’ll work with our key partners & the #AI community on #safeguards to make AI responsible, safe & trustworthy. This is a huge step in a race we can't afford to lose. pic.twitter.com/WBcazIysiK
— Margrethe Vestager (@vestager) May 31, 2023
She said the U.S. and the EU should promote a voluntary code of conduct for AI to provide safeguards as new legislation is being developed. In May, leaders of the so-called G7 nations met in Japan and called for the development of technical standards to keep AI “trustworthy”.
China’s Cyberspace Administration has already issued new regulations that ban the use of AI-generated content to spread “fake news.” In Australia, Industry and Science Minister Ed Husic said regulation is coming soon.
AI
Japan Leads the Way by Adapting Copyright Laws to the Rise of AI
In a groundbreaking move, the Japanese government announced that copyrighted materials used in artificial intelligence (A.I.) training would not be protected under intellectual property laws, according to local media reports.
The Minister for Education, Culture, Sports, Science, and Technology, Keiko Nagaoka, confirmed this decision. Nagaoka stated that it applies to A.I. datasets regardless of their purpose or source.
The policy shift was a response to the increasing significance of A.I. across various industries, including robotics, machine learning, and natural language processing.
Japan aims to foster an open and collaborative environment by exempting A.I. training data from copyright restrictions to stimulate innovation and progress.
This move has sparked a global conversation about the evolving relationship between artificial intelligence and intellectual property rights, raising important questions about balancing innovation and copyright protection.
A.I. training, copyright laws, and fair use policy
Japan’s decision to exempt A.I. training data from copyright laws has sparked global discussions on the delicate balance between intellectual property protection and A.I. advancements.
The Japanese copyright strategy is similar to the United States’ fair use doctrine, which promotes freedom of expression by permitting the unlicensed use of copyright-protected works in certain circumstances. Most European countries also have an open policy toward using copyrighted materials in A.I. training.
Over the past months, several high-profile cases have involved A.I. training and copyright law. The U.S. House Judiciary Committee recently held a hearing examining the intersection of generative A.I. and copyright law.
Speaking at the committee hearing, Sy Damle, a former General Counsel of the U.S. Copyright Office, argued in support of the fair use policy, describing the use of copyrighted works to learn new facts as “quintessential fair use.”
How does this impact the A.I. industry?
Several experts have aligned with Japan’s notion that removing copyright barriers in A.I. training will expedite the development of innovative solutions, ultimately driving economic growth in AI-dependent sectors.
Additionally, the move could prompt a reassessment of copyright laws in other nations as governments grapple with the challenges presented by A.I. technology.
While its long-term impact remains uncertain, Japan’s bold step signifies a significant milestone in the global conversation surrounding A.I., copyright, and the necessary legal frameworks to support these emerging technologies reshaping our world.
Japan warns OpenAI about collecting sensitive data
Reuters reported that Japanese regulators had warned OpenAI against collecting sensitive information without people’s consent.
Japan’s Personal Information Protection Commission told the ChatGPT-creator to minimize its collection of sensitive data for machine learning, adding that it may take action against the firm if its concerns persist.
The warning comes amid reports that more than half of Japan’s population wants stricter control of the A.I. sector. According to the report, there is widespread public concern about the general use of such tools.
Meanwhile, Japan is not the only country concerned about OpenAI’s data collection methods. Earlier in the year, Italy temporarily banned ChatGPT over privacy concerns.
AI
Metaverse Experiences Must Be Worth Having, Says Stephenson
The success of the metaverse depends on the ability of developers to build quality experiences that “millions or billions” want to have. To do that, the sector must find, attract, and financially incentivize the very top talent from related industries.
This is the verdict of Neal Stephenson, the man credited with coining the word metaverse in his 1992 novel Snow Crash.
Source the best developers
Famed author and futurist Neal Stephenson says the metaverse must find and attract the most talented people to make the sector a success. Stephenson’s comments came during an appearance at AWE USA 2023 convention on Wednesday.
“If we’re going to have a metaverse that’s being used all the time by millions or billions of people, then there have to be experiences in the metaverse that are worth having,” Stephenson said.
“That seems like an obvious statement but for me, there’s a glaring and frustrating lack of support for the kinds of people who make those experiences,” added the author. “Right now the skill set that is needed to create the metaverse is basically what you see in the game industry. People who know how to use game engines and how to create the assets that feed into those game engines. Those people by and large have jobs and other things they could be doing.
“We need to create the economic basis for [developers] to get rewarded if they succeed in creating metaverse experiences that a lot of people enjoy.”
Stephenson cited a number of ways that developers may be rewarded, but his personal vision is for a tokenized metaverse owned and controlled by its citizens.
In June last year, Stephenson announced Lamina1, a layer 1 blockchain infrastructure and open metaverse company. Stephenson co-founded the “batteries-included blockchain” with Peter Vessenes, reflecting their vision for an incentivized metaverse that, according to its website, could “empower a new generation of interconnected, immersive experiences across gaming, entertainment, fashion, music, and beyond.”
Seeing double: Ori Inbar and Ori Inbar introduce Neal Stephenson
A tale of two metaverses
Ori Inbar, the CEO of AWE, hosted the conversation with Stephenson on what marked the opening of the 14th annual AWE convention. The latest event is running from May 31 to June 2 in Santa Clara, California. Those who can’t attend in person are invited to participate online.
In an entertaining introduction, a virtual facsimile of Inbar initially addressed conference attendees, only for the real Inbar to interrupt and reveal the entire monologue was written by ChatGPT.
Inbar then asserted that AI makes “original voices… even more valuable than before.”
Once Inbar sat down with Stephenson the pair discussed just how far technology is developing across multiple fields. Inbar asked Stephenson where he believed the metaverse was headed; to something dystopian or something utopian.
“I think it would be very early and premature to make announcements today about whether it is going in one particular direction,” said Stephenson.
To Stephenson, both the positives and the negatives of the metaverse can co-exist within just one reality – something he explored in his 1992 novel.
“Our initial exposure to the metaverse is a kind of very vast market, a lowest common denominator to include … the worst of television,” said Stephenson as he described the surface level of the metaverse. “But later on, as we get farther into the book, we see that people have used it to make beautiful works of art. There are some people … who lavished a lot of time and attention on making homes in the metaverse that are exquisite works of art, both visually and in this sonic environment.”
That ambition of a deeper metaverse is worth striving for, especially as a driver of its long-term success.