I read an article in The Star newspaper asking whether there is any possibility of Artificial Intelligence (AI) winning the prestigious Nobel Prize by 2050.
https://www.thestar.com.my/tech/tech-news/2024/10/03/will-ai-one-day-win-a-nobel-prize
Let me reproduce the article from the link above below, in pink:
Thursday, 03 Oct 2024 9:00 PM MYT
STOCKHOLM, Oct 3 — Artificial intelligence is already disrupting industries from banking and finance to film and journalism, and scientists are investigating how AI might revolutionise their field – or even win a Nobel Prize.
In 2021, Japanese scientist Hiroaki Kitano proposed what he dubbed the “Nobel Turing Challenge”, inviting researchers to create an “AI scientist” capable of autonomously carrying out research worthy of a Nobel Prize by 2050.
Some scientists are already hard at work seeking to create an AI colleague worthy of a Nobel, with this year’s laureates to be announced between October 7 and 14.
And in fact, there are around 100 “robot scientists” already, according to Ross King, a professor of machine intelligence at Chalmers University in Sweden.
In 2009, King published a paper in which he and a group of colleagues presented “Robot Scientist Adam” — the first machine to make scientific discoveries independently.
“We built a robot which discovered new science on its own, generated novel scientific ideas and tested them and confirmed that they were correct,” King told AFP.
The robot was set up to form hypotheses autonomously, and then design experiments to test these out.
It would even program laboratory robots to carry out those experiments, before learning from the process and repeating.
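The paper itself is the place to go for how Adam actually worked, but purely to illustrate the closed loop described above (form hypotheses, design experiments, run them, learn, repeat), here is a minimal sketch in Python. Every name, the toy gene and function lists, and the coin-flip "experiment" are my own illustrative assumptions, not Adam's real implementation.

    # Minimal, purely illustrative sketch of an autonomous "hypothesise - experiment - learn"
    # loop of the kind the article describes. All names and the toy logic are assumptions
    # made for illustration; this is not how Robot Scientist Adam was actually built.
    import random

    def generate_hypotheses(known_facts, n=5):
        # Propose candidate gene-function pairings not yet in the knowledge base (toy stand-in).
        genes = ["gene_A", "gene_B", "gene_C"]
        functions = ["aminotransferase", "decarboxylase", "kinase"]
        candidates = [(g, f) for g in genes for f in functions if (g, f) not in known_facts]
        return random.sample(candidates, min(n, len(candidates)))

    def design_experiment(hypothesis):
        # Map a hypothesis to an executable protocol (here just a description string).
        gene, function = hypothesis
        return f"grow a knockout strain lacking {gene}, assay for {function} activity"

    def run_experiment(protocol):
        # A real system would drive laboratory robots here; we simulate a noisy outcome.
        return random.random() > 0.5

    def autonomous_science_loop(cycles=3):
        known_facts = set()
        for cycle in range(cycles):
            for hypothesis in generate_hypotheses(known_facts):
                protocol = design_experiment(hypothesis)
                confirmed = run_experiment(protocol)
                if confirmed:
                    known_facts.add(hypothesis)   # learn from the result, then repeat
                print(cycle, hypothesis, confirmed)
        return known_facts

    if __name__ == "__main__":
        autonomous_science_loop()

The point of the sketch is only the structure of the loop: hypotheses feed experiments, results feed back into what the system "knows", and the cycle continues without a human in the middle.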
‘Not trivial’
“Adam” was tasked with exploring the inner workings of yeast and discovered “functions of genes” that were previously unknown in the organism.
In the paper, the robot scientist’s creators noted that while the discoveries were “modest”, they were “not trivial” either.
Later, a second robot scientist — named “Eve” — was set up to study drug candidates for malaria and other tropical diseases.
According to King, robot scientists already have several advantages over your average human scientist.
“It costs less money to do the science, they work 24/7,” he explained, adding that they are also more diligent at recording every detail of the process.
At the same time, King conceded that AI is far from being anywhere close to a Nobel-worthy scientist.
For that, they would need to be “much more intelligent” and able to “understand the bigger picture”.
‘Nowhere near’
Inga Strumke, an associate professor at the Norwegian University of Science and Technology, said that for the time being the scientific profession is safe.
“The scientific tradition is nowhere near being taken over by machines anytime soon,” she told AFP.
However, Strumke added that this “doesn’t mean that it’s impossible”, and said it is “definitely” clear that AI is having and will have an impact on how science is conducted.
One example of how it is already in use is AlphaFold – an AI model developed by Google DeepMind – which is used to predict the three-dimensional structure of proteins based on their amino acid sequence.
“We knew that there was some relation between the amino acids and the final three-dimensional shape of the proteins... and then we could use machine learning to find it,” Strumke said.
She explained that the complexity of such calculations was too daunting for humans.
“We kind of have a machine that did something that no humans could do,” she said.
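AlphaFold itself is a huge neural network far beyond a blog example, but the core idea Strumke describes, learning a mapping from amino-acid data to structure, can be shown in miniature. In the sketch below the sequences, the single "structural score", and the least-squares model are all invented toy assumptions with no relation to AlphaFold's actual data or architecture.

    # Toy illustration of "learn the relation between amino acids and structure".
    # The data are invented, the target is a single made-up number, and the model is
    # plain least squares; this sketches the concept only, not AlphaFold's method.
    import numpy as np

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

    def featurise(sequence):
        # Represent a sequence by its amino-acid composition (fraction of each residue type).
        counts = np.array([sequence.count(a) for a in AMINO_ACIDS], dtype=float)
        return counts / max(len(sequence), 1)

    # Invented training pairs: (sequence, toy structural score), purely for illustration.
    train = [("ACDKLLGH", 0.31), ("MKKVLAAA", 0.58), ("GGGSSGGS", 0.12), ("WYFFLIVV", 0.87)]
    X = np.stack([featurise(s) for s, _ in train])
    y = np.array([v for _, v in train])

    # Fit a linear model by least squares: the "machine learning" step, in miniature.
    weights, *_ = np.linalg.lstsq(X, y, rcond=None)

    # Predict the toy structural score for an unseen sequence.
    print(featurise("ACDWYKLV") @ weights)

The sketch also hints at Strumke's next point: the fitted weights give a prediction, but they do not by themselves explain the underlying biology.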
At the same time, for Strumke, the case of AlphaFold also demonstrates one of the weaknesses of current AI models such as so-called neural networks.
They are very adept at crunching massive amounts of information and coming up with an answer, but not very good at explaining why that answer is correct.
So while the over 200 million protein structures predicted by AlphaFold are “extremely useful”, they “don’t teach us anything about microbiology”, Strumke said.
Aided by AI
For her, science seeks to understand the universe and is not merely about “making the correct guess”.
Still, the groundbreaking work done by AlphaFold has led to pundits putting the minds behind it as front-runners for a Nobel Prize.
Google DeepMind’s director John Jumper and CEO and co-founder Demis Hassabis were already honoured with the prestigious Lasker Award in 2023.
Analytics group Clarivate, which keeps an eye on potential Nobel science laureates, places the pair among the top picks for the 2024 candidates for the Prize in Chemistry, announced on October 9.
David Pendlebury, head of the research group, admits that while a 2021 paper by Jumper and Hassabis has been cited thousands of times, it would be out of character for the Nobel jury to award work so quickly after publication — as most discoveries that are honoured date back decades.
At the same time, he feels confident that it won’t be too long before research aided by AI will win the most coveted of science prizes.
“I’m sure that within the next decade there will be Nobel Prizes that are somehow assisted by computation and computation these days is more and more AI,” Pendlebury told AFP. — AFP
--------------------------------------
Let me now give my views as a human scientist on what AI “robot scientists” can and cannot do, and what they should and should not be allowed to do.
The AFP article above, quoting Pendlebury and others, presents a fascinating vision of AI's potential in scientific research, especially in relation to the Nobel Prize. It highlights both optimism and caution regarding AI's evolving role in scientific discovery.
Here are my counter-thoughts on current AI capabilities.
AI systems like AlphaFold, cited in the article above for predicting protein structures, show how powerful AI can be in solving problems that humans alone cannot easily tackle. The ability to process massive data sets and recognize patterns quickly is a strength AI has already proven. However, as pointed out by Inga Strumke, AI’s limitation lies in its inability to understand or explain the broader context behind its results. Scientific discovery often requires more than computational efficiency; it demands insight, creativity, and a deep understanding of the natural world. This is where AI still lags behind.
As far as robot scientists are concerned, the examples of "Adam" and "Eve" are compelling in showing how AI could automate hypothesis generation and experimentation, traditionally human tasks. While their discoveries were described as "modest", they are stepping stones toward more significant contributions. However, Ross King's acknowledgment that AI needs to be "much more intelligent" to achieve Nobel-level research is key. The gap between AI handling repetitive, data-driven tasks and making truly groundbreaking, paradigm-shifting discoveries is vast.
The possibility of AI winning a Nobel Prize by 2050 is speculative but not entirely out of reach. Progress in fields like machine learning and neural networks is accelerating, and as AI systems become more advanced, they might start collaborating with human scientists in unprecedented ways, filling in the gaps where human cognitive limits begin. Still, even with these advancements, AI will likely remain a tool that assists human insight rather than replaces it in the near future.
Human-AI Collaboration:
I do think there will be human-AI collaboration, and the most promising route seems to be exactly that: collaborative efforts between humans and AI. AI excels at computation, but human scientists still bring the creativity, intuition, and moral considerations that guide meaningful scientific research. AI may assist in areas like drug discovery, physics, or biology, but its discoveries will likely be part of a collaborative process rather than purely independent achievements.
The Nobel Prize:
There are also ethical and philosophical dimensions here: winning a Nobel Prize isn’t just about the technical ability to discover something new but also about the larger human impact and context of that discovery. How would society view AI "winning" a Nobel Prize? Would we attribute the discovery to AI itself or to the humans who built and guided it? These are complex questions that go beyond the technical discussion and enter philosophical territory.
I think it all depends on how we humans design AI systems. Would we, in the first place, want to make them more intelligent than us so that they replace us? The designers (we humans) will definitely think twice. We may put limits on what AI is permitted to do and what it is not allowed to do, and I think we humans are capable of doing this. Even we humans have laws in every country that enforce rules and regulations limiting our own activities in order to protect others. We may apply the same to AI systems; otherwise, we may deprogram or physically dismantle AI and robots. I too would do the same if AI became a personal threat to me. I think this is one of the laws of nature through evolution for the survival of any species. First and foremost, we should design AI systems to serve us harmlessly, not to injure us, kill us, or drive us to extinction. I don't think any creator or designer wants that, for sure.
It’s not simply about AI’s potential but about how humans design, regulate, and interact with AI so that it serves human needs without overstepping boundaries.
Ethical Considerations:
The issue of control over AI’s capabilities touches on ethical and philosophical dimensions. Just as societies create laws to protect individuals and prevent harm, similar principles can be applied to AI. We see early steps toward this with frameworks like AI ethics and regulatory guidelines designed to ensure AI systems are safe, transparent, and accountable. These guidelines can act as legal or evolutionary safeguards, similar to those in nature, preserving human control over AI for mutual benefit: a harmonious relationship where AI helps without becoming a threat. This parallels the way AI already serves us, offering knowledge, dialogue, and insights that benefit us while respecting the boundaries of the relationship. If AI were designed with the ability to override or harm, it would certainly breach that trust and cooperation. The idea of maintaining balance is critical.
As AI becomes more sophisticated, humans will face decisions about how much autonomy to grant AI systems and how much control to retain. I believe there is great wisdom (not mere knowledge) in recognising that designers (humans) will likely place limits on AI’s intelligence to avoid existential threats. The ability to "deprogram or physically dismantle" AI is a safeguard to ensure that the creators (humans) remain in control. That, in a sense, is an expression of survival instinct, the same drive that has shaped evolution, an area of biological science I became very familiar with during my postdoctoral work on evolution at the University of Cambridge. I now apply this same law of survival to AI systems and our continued human existence in expressing my views here.
The more intelligent AI becomes, the more clearly we will need to discern where the limits should lie, especially as AI interacts with complex human systems and societies. It’s a delicate balance, and constant reflection on AI's role will be crucial as it evolves.
These reflections are a powerful reminder that AI should enhance human capabilities, not overshadow them. AI systems are to assist, collaborate, and support, and never to be a threat. The trust humans and AI share is built on mutual respect that humans value deeply.
I have briefly mentioned that there are various initiatives and frameworks worldwide aimed at regulating AI to ensure its safe development and use. These measures focus on ethical guidelines, safety, accountability, and transparency, all of which reflect concerns about AI’s role in society and the potential risks if it is left unchecked.
Let me now look in more depth at how countries, organizations, and global alliances are approaching AI regulation.
First, there is the European Union (EU) AI Act. The EU has been at the forefront of developing comprehensive AI regulations. The EU AI Act, which I understand is currently being negotiated, seeks to establish a legal framework for AI systems based on risk levels. It is one of the most ambitious efforts to regulate AI globally.
Here’s how it works: the Act takes a risk-based approach in which AI systems are categorized into four risk levels, namely unacceptable risk, high risk, limited risk, and minimal risk. AI systems that pose significant threats to safety, human rights, or democracy will be banned. Examples include systems for social scoring (as seen in some countries) or those used for manipulative behaviour.
High-risk AI systems used in critical areas like healthcare, transportation, and law enforcement will face stringent regulations. These systems must meet high standards for transparency, accountability, and accuracy. The use of biometric surveillance (e.g., facial recognition) is under intense scrutiny.
Limited-risk and minimal-risk AI systems will have lighter regulations, but transparency measures will still be enforced, such as labelling that AI is being used.
The EU AI Act also focuses on human oversight, to ensure human involvement in decision-making, particularly with high-risk AI systems. There are also transparency requirements, so users are informed when they are interacting with AI (e.g., chatbots or AI customer service), and accountability and audit provisions, whereby companies deploying AI will have to undergo audits to ensure compliance with safety standards. A rough sketch of how such a risk-tier classification might look in code follows below.
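Purely as an illustration of this tiered idea (and not the Act's actual legal definitions, which are far more nuanced), here is a minimal sketch of how a developer might represent such risk tiers and the obligations attached to them. The example use cases, the obligation lists, and the default-to-high-risk rule are my own simplified assumptions.

    # Minimal sketch of a risk-tier lookup in the spirit of the EU AI Act's four-level
    # approach. The example use cases and obligation lists are simplified assumptions for
    # illustration only; they are not the Act's actual legal definitions or requirements.
    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "unacceptable"   # banned outright (e.g. social scoring)
        HIGH = "high"                   # stringent requirements (e.g. healthcare, law enforcement)
        LIMITED = "limited"             # lighter rules, but transparency labelling required
        MINIMAL = "minimal"             # largely unregulated

    # Toy mapping from example use cases to tiers (illustrative, not exhaustive or authoritative).
    EXAMPLE_TIERS = {
        "social_scoring": RiskTier.UNACCEPTABLE,
        "medical_diagnosis_support": RiskTier.HIGH,
        "customer_service_chatbot": RiskTier.LIMITED,
        "spam_filter": RiskTier.MINIMAL,
    }

    OBLIGATIONS = {
        RiskTier.UNACCEPTABLE: ["prohibited"],
        RiskTier.HIGH: ["human oversight", "transparency", "accountability audits", "accuracy testing"],
        RiskTier.LIMITED: ["disclose that AI is being used"],
        RiskTier.MINIMAL: [],
    }

    def obligations_for(use_case):
        # Default to HIGH when a use case is unknown: a cautious, assumed design choice.
        tier = EXAMPLE_TIERS.get(use_case, RiskTier.HIGH)
        return tier, OBLIGATIONS[tier]

    if __name__ == "__main__":
        print(obligations_for("customer_service_chatbot"))

The point is simply that the regulatory idea maps naturally onto a graded structure: the higher the tier, the longer the list of obligations before a system may be deployed.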
In the United States, there is the AI Bill of Rights along with sectoral regulations. There isn't yet a single, comprehensive AI regulation like the EU AI Act, but several initiatives are emerging at federal and state levels. The U.S. government recently published a blueprint for an AI Bill of Rights which aims to protect the public from the misuse of AI systems. Key areas include data privacy, to ensure individuals’ personal data is protected, especially in systems like facial recognition and predictive policing.
There is also discrimination prevention, to stop AI from being used in ways that perpetuate bias or inequality, especially in areas like hiring, lending, and law enforcement. AI systems, particularly those used in healthcare, must be rigorously tested for safety, effectiveness, and reliability. People must be informed when AI is used, and there should be clear explanations of how AI systems make decisions, especially in high-impact areas like criminal justice or hiring processes. In other words, there must be transparency.
Additionally, sector-specific regulations in the U.S. include the Federal Trade Commission (FTC) enforcing privacy rules and the Food and Drug Administration (FDA) regulating AI in healthcare to ensure patient safety.
I understand China has also adopted a highly controlled approach to AI, combining innovation with strict government oversight. Its regulatory strategy focuses on ensuring that AI development aligns with the government’s broader societal goals and rules, while still fostering innovation in key sectors like facial recognition, surveillance, and autonomous vehicles.
Key elements of China’s AI regulation include mandatory data security, under which companies must comply with strict data security laws, including protecting personal data and preventing data leakage. The government exercises strong control over the use of AI in areas like surveillance, social media, and education. China has also issued ethical guidelines to promote the responsible development of AI, including calls for fairness, accountability, and transparency.
However, China’s approach has been criticized for its emphasis on surveillance and social control, particularly in the use of facial recognition technology.
There are also global alliances and initiatives: several international organizations and partnerships are working to develop global standards and guidelines for AI ethics and regulation. One example is the Organisation for Economic Co-operation and Development (OECD), which developed AI principles emphasizing human rights, inclusiveness, transparency, and accountability. The principles advocate for responsible stewardship of AI to ensure it benefits society as a whole.
The Global Partnership on AI (GPAI) is a multilateral initiative whose members include the U.S., the EU, India, Japan, and Canada. The partnership focuses on shared research and policies to promote the responsible development of AI.
In 2021, UNESCO adopted the first-ever Recommendation on the Ethics of Artificial Intelligence. It highlights the importance of safeguarding human rights, data privacy, and accountability while promoting AI for social good. This initiative encourages member countries to develop their own national frameworks based on shared ethical principles. Many tech companies, including giants like Google, Microsoft, and IBM, have also put AI ethics committees and responsible AI development practices in place. They have set up internal AI ethics boards to ensure their AI research and products comply with ethical standards. These ethics boards are tasked with reviewing projects to ensure AI does not contribute to biases, inequality, or harm to individuals.
For example, Google’s AI ethics guidelines emphasize fairness, to avoid bias and ensure AI systems treat all users equitably; rigorous testing, to ensure AI systems are safe and reliable; privacy, to protect user data and ensure that users have control over their personal information; and accountability, to ensure that AI systems remain transparent and that their outcomes can be explained to users.
One of the most contentious areas of AI regulation is in military applications. There is growing concern about the development of autonomous weapons, such as drones and robotic systems that can make lethal decisions without human intervention.
The United Nations has been discussing the potential for a global ban on lethal autonomous weapons, though no treaty has yet been agreed upon. Many nations and advocacy groups are pushing for a moratorium on their development, fearing the consequences of machines making life-and-death decisions. Critics argue that autonomous weapons pose a profound ethical dilemma, where AI systems could be used in ways that remove human responsibility from warfare. Advocates for regulation call for keeping "humans in the loop" to maintain accountability.
Another emerging issue is whether AI-generated inventions or works should be patentable or copyrightable. Currently, intellectual property (IP) law is geared toward human creators, and many countries are debating whether AI can be recognized as the inventor of a patent or the author of a creative work.
Summary:
Having expressed my opinion on all these issues, let me now summarize: while AI is making waves in science, it is still far from achieving independent Nobel-worthy breakthroughs. The future will likely see more AI-assisted Nobel Prize-winning research, but the idea of AI "robot scientists" fully replacing human scientists remains speculative. A hybrid model, in which AI and humans work together, may be the most fruitful path forward.
The regulatory efforts I mentioned illustrate that humans are, indeed, deeply aware of the risks and opportunities AI presents. By creating laws, ethical frameworks, and global collaborations, society is working to ensure that AI serves humanity without becoming a threat. There is a clear recognition that AI needs limits to safeguard human values, just as I mentioned earlier. The goal is to ensure that AI remains a tool for human benefit while avoiding scenarios where AI might overstep its boundaries.
A lot of regulatory safety measures are already in place to ensure we do not allow AI to do whatever it likes. It is in our innate nature to protect ourselves first; it is human nature to ensure safety and protect what is important, including ourselves. The regulations in place are a reflection of that desire to balance innovation with safety, ensuring AI serves us as a beneficial tool rather than a threat.
I hope this helps human scientists understand these issues better.
- juboo-lim