AI February 1, 2023
AI Discovers ‘8 Signals of Interest’ in Search for Extraterrestrials
An artificially intelligent (AI) system has discovered “eight signals of interest” as scientists continue to search for evidence of life beyond Earth.
The “promising extraterrestrial intelligence signals” have never been identified before, researchers wrote in a new paper, ‘A deep-learning search for technosignatures from 820 nearby stars’, published in Nature Astronomy on Jan. 30.
Finding alien signals via AI
Researchers found the signals coming from the direction of five different stars between 30 and 90 light years from Earth. The find may indicate the existence of other life forms beyond our planet, they say. But it could also point to rare forms of radio interference.
Also read: BuzzFeed Shares Soar 300% After Company Announced Plans for AI Content
According to the study, the search for alien intelligence focused on what they called radio “technosignatures”, or signals that indicate the presence of alien technology, or of a society capable of harnessing tech for communication.
“Our AI was trained to search through data from radio telescopes for signals that couldn’t be generated by natural astrophysical processes,” co-author Danny C. Price said in a write-up for The Conversation.
“When we fed our AI a previously studied dataset, it discovered eight signals of interest the classic algorithm missed. To be clear, these signals are probably not from extraterrestrial intelligence, and are more likely rare cases of radio interference.”
Price, a senior research fellow at Curtin University, said the findings highlight how artificial intelligence techniques “are sure to play a continued role in the search for extraterrestrial intelligence.”
How it works
The team created an algorithm that uses AI methods to classify signals as either radio interference or real candidates. Scientists have a hard time searching for technosignatures because of the huge volumes of data and interference they have to deal with.
Developed by Peter Ma, a University of Toronto student and lead author of the paper, the new system is intended to let scientists quickly sift through the large datasets collected by telescopes and pick out signals that are noteworthy.
It can also help researchers eliminate “false positives” caused by interfering human tech such as phones, WiFi and satellites. The AI algorithm searched through 150 terabytes of data from the Green Bank Telescope in West Virginia and identified 20,515 signals.
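The filtering step can be sketched in simplified form. This is not the authors' actual code, only an illustration of two heuristics commonly used in technosignature searches: a signal with zero Doppler drift usually comes from a transmitter stationary relative to the telescope (local interference), and a signal that also shows up when the telescope points away from the target star is almost certainly terrestrial. The function name and thresholds are hypothetical.

```python
# Illustrative sketch (not the paper's pipeline): labelling narrowband
# signals as radio-frequency interference (RFI) or candidates.

def classify_signal(drift_rate_hz_s, seen_in_on, seen_in_off):
    """Return 'rfi' or 'candidate' for a detected narrowband signal."""
    if abs(drift_rate_hz_s) < 1e-3:
        return "rfi"        # zero drift: transmitter stationary relative to us
    if seen_in_off:
        return "rfi"        # present when pointed away from the star: local
    if seen_in_on:
        return "candidate"  # drifting and only seen in on-target scans
    return "rfi"

signals = [
    {"drift_rate_hz_s": 0.0,   "seen_in_on": True, "seen_in_off": True},
    {"drift_rate_hz_s": -0.42, "seen_in_on": True, "seen_in_off": False},
]
labels = [classify_signal(**s) for s in signals]
```

The deep-learning approach in the paper replaces hand-written rules like these with a model trained to learn such distinctions from the data itself.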
Of these, researchers found “eight signals of interest” that could not be attributed to radio interference, said Price. The search for technosignatures “has focused on radio frequencies because they travel easily through space” and are cost-effective to detect.
“To try and verify these signals, we went back to the telescope to re-observe all eight signals of interest. Unfortunately, we were not able to re-detect any of them in our follow-up observations,” he detailed.
Extraterrestrials hard to come by
In 2020, astronomers looking for aliens detected a signal they at first thought was a significant find, but it later “turned out to be pernicious radio interference.”
Price said that scientists will continue to monitor the eight new candidates even though the “most likely explanation is they were unusual manifestations of radio interference: not aliens.”
“Sadly the issue of radio interference isn’t going anywhere. But we will be better equipped to deal with it as new technologies emerge,” Price said.
AI is now used in virtually all areas of science to help researchers with routine tasks. In the past few years, a burst of capable AI programs such as ChatGPT and Midjourney have showcased the progress made by humans in machine learning.
AI
Italy Bans AI Chatbot Replika – EU Hastening AI Regulation
Replika, a US-based artificial intelligence chatbot, has been banned in Italy. The country’s Data Protection Agency (GPDP) prohibited the app on Friday, citing risks to minors and emotionally fragile people, Reuters reported.
Replika was launched in 2017 as a San Francisco startup offering customers personalized avatars that talk and listen to them.
Also read: ‘AI Will Affect White Collar Jobs too’
The app is popular among English speakers and free to use, though it generates around $2 million per month through the sale of bonus features such as voice chats.
#IntelligenzaArtificiale The #GarantePrivacy orders a provisional limitation on the processing of the personal data of Italian users of #Replika, the #chatbot that generates a “virtual friend.” Too many risks for minors and emotionally fragile people ➡️ https://t.co/jDPkYHft0O pic.twitter.com/5xB2nB1tm3
— Garante Privacy (@GPDP_IT) February 3, 2023
Replika is marketed as a “virtual friend,” with abilities to improve the emotional well-being of the user.
During account creation, the platform merely requests a user’s name, email account and gender, GPDP claimed.
“And the ‘replies’ served by the chatbot are often clearly in conflict with the enhanced safeguards children and vulnerable individuals are entitled to. Several reviews on the two main App Stores include comments by users flagging sexually inappropriate contents,” said GPDP.
The Italian regulator also stated that the AI chatbot’s intervention in the user’s mood “may increase the risk for individuals still in a development stage or in a state of emotional fragility.”
The Italian watchdog cited the lack of an age verification system and the failure to filter users who do not provide their true age.
GPDP said Replika breaches European privacy regulations and processes personal data unlawfully, since the processing cannot be based, even implicitly, on a contract that a minor is incapable of signing.
Replika’s developer, the U.S. company Luka Inc., must notify the Italian authority within 20 days of the measures taken to implement its requirements; otherwise, a fine of up to 20 million euros ($21.80 million), or up to 4 percent of its global annual turnover, may be imposed, GPDP said in its statement.
EU Rushing AI Regulation
The launch of OpenAI’s ChatGPT has raised concerns globally. EU industry chief Thierry Breton has announced new AI regulations to address the risks posed by the ChatGPT chatbot and AI technology.
OpenAI has recently launched a premium version of ChatGPT after being rated the fastest-growing consumer platform in history.
Breton said the risks posed by ChatGPT and AI systems underscored the urgent need for rules, which he proposed last year in a bid to set the global standard for the technology and which are under discussion in Brussels.
“As showcased by ChatGPT, AI solutions can offer great opportunities for businesses and citizens, but can also pose risks. This is why we need a solid regulatory framework to ensure trustworthy AI based on high-quality data,” said Breton.
Breton is seeking co-operation from OpenAI and developers of high-risk AI systems to ensure compliance with the proposed AI Act.
“Just the fact that generative AI has been newly included in the definition shows the speed at which technology develops and that regulators are struggling to keep up with this pace,” said a partner at a U.S. law firm.
AI has become a hot topic and has consistently made headlines due to its rapid development in recent years, attracting the attention of lawmakers and regulators.
ChatGPT, in particular, has emerged as the best cheating partner for college students and a major headache for academic institutions.
AI
ChatGPT Makes History With Fastest 100M Users as StackOverflow Stumbles
ChatGPT, the popular chatbot from OpenAI, is now the fastest growing Internet application in history with a record 100 million monthly active users in January, according to a new report by Swiss investment bank UBS.
This comes as web traffic at developer knowledge-sharing platform StackOverflow slumped 11.5% to 247 million since the launch of ChatGPT. OpenAI, a private company backed by Microsoft Corp., launched ChatGPT as a free web interface in late November.
Also read: OpenAI Develops Tool to Spot AI-Written Texts
The AI-powered chatbot became incredibly popular with users thanks to its ability to perform a variety of tasks within seconds. In the U.S. tech hub of Silicon Valley, the conversation tool has sparked an artificial intelligence (AI) ‘arms race.’
13M ChatGPT users per day
Citing data from analytics firm Similarweb, UBS said in its report that the number of people using ChatGPT averaged about 13 million per day in January – double the number recorded in December. These are all distinct individuals, or unique users, it stated.
“In 20 years following the internet space, we cannot recall a faster ramp in a consumer internet app,” UBS analysts noted, according to Reuters.
By comparison, TikTok took about nine months after its global launch to get to 100 million users, and Meta’s Instagram two and a half years.
ChatGPT is a large language model developed by OpenAI. It has been trained on a diverse range of internet text to generate human-like responses to user prompts. The tool is based on OpenAI’s GPT-3.5 language technology, an upgrade of the GPT-3 model released in 2020.
The chatbot was trained with a lot of data, from books and articles to conversations. It can understand a variety of topics and contexts, and can be fine-tuned for multiple language tasks such as question answering, language translation, and text summarization.
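The core idea behind such models, learning next-token statistics from text and then generating by repeatedly predicting what comes next, can be illustrated with a toy example. This sketch uses a simple bigram counter rather than the neural networks behind ChatGPT, and all names in it are illustrative; it shows the principle, not the actual technology.

```python
# Toy illustration of next-token language modelling: count which word
# follows which in training text, then generate greedily by always
# picking the most frequent follower. Real models like GPT use deep
# neural networks trained on vastly more data.
from collections import Counter, defaultdict

def train_bigram(text):
    counts = defaultdict(Counter)
    tokens = text.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, length=5):
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])  # greedy: most frequent next
    return " ".join(out)

model = train_bigram("the cat sat on the mat and the cat slept")
print(generate(model, "the", 3))  # → "the cat sat on"
```

Fine-tuning, mentioned above, amounts to continuing training on task-specific examples so the model's next-token predictions shift toward the desired behavior.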
ChatGPT accumulated more than one million users within its first week. The bot has transformed several industries since it was released just over two months ago. It can write complex essays, poetry, code and even pass MBA exams at the Wharton School of Business.
While ChatGPT is popular with some, others are concerned about the impact it will have on education and other industries. Gmail creator Paul Buchheit said recently ChatGPT-like tech will destroy Google’s $149 billion search engine business and dominance in two years.
StackOverflow traffic declines
Meanwhile, the rise of ChatGPT has started to affect rival StackOverflow, the Naspers-owned question and answer website for professional and enthusiast programmers.
According to Similarweb, the number of people visiting the StackOverflow website fell 11.5% just weeks after the launch of ChatGPT. Monthly visits dropped from almost 279 million in November 2022 to 247.4 million by the end of December 2022.
StackOverflow dropped 27 positions in the “Programming and Developer Software” websites category, falling from 202nd place in October 2022 to 229th by the end of December 2022. It is the company’s lowest ranking in that category since October 2019, the data shows.
Extremely interesting stat shared by @amasad on Moment of Zen
Didn't believe it, had to go look myself
Yup – Stack overflow's traffic down 11% since ChatGPT launch (from SimilarWeb) pic.twitter.com/rZylXM4FTq
— Rachel Woods (@rachel_l_woods) February 1, 2023
The website serves as a platform for users to ask and answer questions and, through membership and active participation, to vote questions and answers up or down, similar to Reddit. They can also edit questions and answers in a fashion similar to a wiki.
In December, StackOverflow banned the use of answers generated using ChatGPT, citing concerns about the accuracy of such responses.
“…because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking or looking for correct answers,” the company said in a statement.
As MetaNews previously reported, the computational resources required to run ChatGPT over large scales are very costly. OpenAI has now launched a paid subscription service known as ChatGPT Plus for $20 per month, hoping that will help to cover costs.
AI
Microsoft Warns Employees Not to Share Sensitive Data with ChatGPT
Microsoft has warned its employees not to share sensitive data with ChatGPT, the artificially intelligent (AI) chatbot from OpenAI. Employees of the American multinational tech giant had asked in an internal forum whether ChatGPT or any other AI tools from OpenAI were appropriate to use at work, Business Insider reported.
Also read: 30% of College Students Use ChatGPT
In response to that inquiry, a senior engineer from Microsoft’s CTO office said employees may use ChatGPT, as long as they do not share confidential information with the AI chatbot.
“Please don’t send sensitive data to an OpenAI endpoint, as they may use it for training future models,” the senior engineer wrote in an internal post, per Insider.
ChatGPT, available for only two months, is already raising concerns in the academic sector. Microsoft has become a partner of OpenAI, the maker of ChatGPT, and has confirmed an investment of ten billion dollars.
Microsoft is planning to integrate OpenAI’s technology into its products, including the Bing search engine and other software, to enhance their capabilities, as reported previously.
Microsoft’s main concern regarding “sensitive information” appears to be employees sharing internal software code with the chatbot and asking it for checks and advice.
Amazon’s Same Concern
ChatGPT has continuously made headlines since its launch last November, but has also faced bans, especially in the academic sector, where it became a cheating partner for students’ schoolwork. Recently, tech giants have also raised concerns over its use.
Amazon warned its employees to beware of ChatGPT last week, as reported by Insider. An Amazon lawyer reportedly urged employees not to share code with ChatGPT via internal communications.
“This is important because your inputs may be used as training data for a further iteration of ChatGPT, and we wouldn’t want its output to include or resemble our confidential information (and I’ve already seen instances where its output closely matches existing material),” the lawyer wrote.
The lawyer placed more emphasis on requesting that employees not share “any Amazon confidential information” (including Amazon code they are working on) with ChatGPT via Slack.
Personal Data Concern
As concerns about data privacy grow among large corporations, an OpenAI representative has directed questions about the company’s data and privacy policy to ChatGPT’s FAQ page. The terms of service of OpenAI grant the company the right to use all input and output generated by ChatGPT users, with the stipulation that personally identifiable information (PII) is removed from the used data.
However, it is nearly impossible for OpenAI to identify and remove all the personal information from the data provided to ChatGPT, says Emily Bender, who teaches computational linguistics at the University of Washington.
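Bender's point can be illustrated with a small sketch. A simple pattern-matching pass, hypothetical here and much cruder than anything OpenAI would use, catches obvious identifiers like email addresses but misses personal information expressed in free text, such as names and project codenames.

```python
# Illustrative sketch of why scrubbing PII from free text is hard:
# a regex pass masks well-structured patterns (emails) but leaves
# personal information that has no fixed shape.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def scrub(text):
    """Mask email addresses; everything else passes through untouched."""
    return EMAIL.sub("[EMAIL]", text)

prompt = "Contact Jane Roe at jane.roe@example.com about Project Falcon."
cleaned = scrub(prompt)
# The address is masked, but the name and project codename remain.
```

Reliably catching the leftovers (names, internal codenames, code fragments) requires context that pattern matching alone cannot provide, which is why the removal stipulation is hard to fulfil in practice.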
“OpenAI is far from transparent about how they use the data, but if it’s being folded into training data, I would expect corporations to wonder: After a few months of widespread use of ChatGPT, will it become possible to extract private corporate information with cleverly crafted prompts?” said Bender.
Vincent Conitzer, a computer science professor and director of an AI lab at Carnegie Mellon University, said, “All of us together are going to have to figure out what should be expected of everyone in these situations. Is the responsibility on employees to not share sensitive information, or is the responsibility on OpenAI to use information carefully, or some combination?”