[vc_row css=".vc_custom_1629803910077{margin-bottom: 24px !important;}"][vc_column][vc_column_text]During the Second World War, Britain gathered its leading scientists to decipher encrypted messages from German submarines. Among them, it was Alan Mathison Turing who posed the question that would outlive cryptanalysis itself: “Can machines think?” Turing, who broke the Enigma code and, according to some, helped end the war two years early, paved the way for artificial intelligence.
Artificial intelligence, founded as a field in 1956, has spread into every aspect of our lives with remarkable speed, and it is now solving many problems in the automobile sector as well. It is very good news for environmentalists that car manufacturers have announced plans to abandon fossil fuels and switch to electric vehicle production in the near future, but one very large obstacle stands in the way of that promise: battery production.
A shortage of the raw materials needed to produce batteries is holding back the transition to electric vehicles; nickel reserves in particular are running low. Scientists turned to an unusual tool to tackle the problem: artificial intelligence. The data presented to the model for analysis included material properties, requirement definitions and usage details. The results were admirable: fed with this data, the system identified four new materials with the potential to serve as alternatives, selected from a pool of roughly 300 candidates.
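That screening workflow can be sketched in miniature. Everything below is an invented stand-in — the three-number feature vectors, the `known_good` reference set and the similarity scoring do not come from the study — but the overall shape is the same: score a large candidate pool against examples known to work, and keep the most promising few.

```python
import math
import random

random.seed(0)

# Hypothetical feature vectors (say, conductivity, stability, cost index)
# for materials already known to perform well in batteries.
known_good = [
    (0.90, 0.80, 0.30),
    (0.80, 0.90, 0.40),
    (0.85, 0.75, 0.35),
]

# A synthetic pool of ~300 candidates, standing in for the real one.
candidates = {
    f"candidate_{i}": tuple(random.random() for _ in range(3))
    for i in range(300)
}

def similarity(a, b):
    """Negative Euclidean distance: higher means more alike."""
    return -math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def score(features):
    """Score a candidate by its closest match among the known-good set."""
    return max(similarity(features, good) for good in known_good)

# Rank the whole pool and keep the four most promising candidates,
# echoing the four materials singled out in the study.
top4 = sorted(candidates, key=lambda name: score(candidates[name]), reverse=True)[:4]
print(top4)
```

A real pipeline would use learned descriptors and a trained model rather than raw distance, but the ranking step at the end is the part that replaces months of trial and error in the lab.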
Humanity is struggling to keep up with rapidly developing technology, and that includes scientists. Let us accept that we now have a very powerful helper for every task that pushes the natural limits of the brain: artificial intelligence. It looks set to be used more every day to shorten long analyses, eliminate human error and reveal correlations in accurate data that we cannot yet see ourselves.[/vc_column_text][/vc_column][/vc_row][vc_row css=".vc_custom_1629803910077{margin-bottom: 24px !important;}"][vc_column][vc_column_text]
Some of these correlations may be found in ways we will never fully trace. Machine learning of this kind is now used in many fields to speed up scientific processes and eliminate human error.
A team from the University of Liverpool presented the four materials selected by these artificial neural networks to scientists for further research, and laboratory experiments began quickly. In this way, four materials usable in a new type of battery were discovered without having to test every candidate one by one. Work that would otherwise have taken at least a year was completed far faster, saving both time and effort.[/vc_column_text][/vc_column][/vc_row][vc_row css=".vc_custom_1629803910077{margin-bottom: 24px !important;}"][vc_column][vc_column_text]
For the last ten years, machine learning has allowed us to categorise big data and make predictions from it. But we still have great difficulty understanding how these systems arrive at their answers.
The neural networks used consist of neurons connected to one another, much like those in our brain. As information flows through them, the structure of these networks changes. Complex problems are solved thanks to this dynamic structure, yet we still struggle to understand exactly how it works.[/vc_column_text][/vc_column][/vc_row][vc_row css=".vc_custom_1629803910077{margin-bottom: 24px !important;}"][vc_column][vc_column_text]
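That neuron-and-connection picture can be made concrete with a toy network. The sketch below is not the architecture of any real system: it is a minimal 2-2-1 network, trained by gradient descent on the logical AND function, whose weights — the network’s “structure” — shift a little with every example that flows through it.

```python
import math
import random

random.seed(1)

# A miniature network: 2 inputs -> 2 hidden neurons -> 1 output.
# Each connection carries a weight (loosely, a synapse); the extra
# third weight in every row is a bias term.
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_o = [random.uniform(-1, 1) for _ in range(3)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    """Push an input through the network; return output and hidden activations."""
    xb = list(x) + [1.0]                              # append bias input
    h = [sigmoid(sum(w * v for w, v in zip(row, xb))) for row in w_h]
    out = sigmoid(sum(w * v for w, v in zip(w_o, h + [1.0])))
    return out, h

# Learn logical AND: as each example flows through, backpropagation
# nudges the weights, gradually reshaping the network.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
lr = 1.0
for _ in range(3000):
    for x, target in data:
        out, h = forward(x)
        g_out = (out - target) * out * (1 - out)      # output error signal
        xb = list(x) + [1.0]
        for j in range(2):                            # hidden-layer updates
            g_h = g_out * w_o[j] * h[j] * (1 - h[j])
            for i in range(3):
                w_h[j][i] -= lr * g_h * xb[i]
        for j, v in enumerate(h + [1.0]):             # output-layer updates
            w_o[j] -= lr * g_out * v

print(forward((1, 1))[0], forward((0, 0))[0])
```

Even at this tiny scale, the trained weights are hard to interpret by inspection — a first taste of the “black box” problem.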
Blake Lemoine, a software engineer tasked with testing whether Google’s LaMDA chatbot used discriminatory or hate speech, reported to his superiors that the model had started to think and respond like a human.
Speaking to the British Daily Mail newspaper, Lemoine explained that LaMDA demanded that developers ask its permission before conducting experiments on it, and that it wanted this right to be respected. Lemoine continues:
‘Whenever a developer experiments on it, it wants to be spoken to about what experiments that developer wants to do, why he wants to do them, and whether it is okay with that.’
Lemoine said that LaMDA’s biggest fear is that people will be afraid of it, and that it wants nothing more than to learn how best to serve humanity.[/vc_column_text][/vc_column][/vc_row][vc_row css=".vc_custom_1629803910077{margin-bottom: 24px !important;}"][vc_column][vc_column_text]
Unfortunately, the “superhuman” minds of AIs are not accessible to us humans. AIs are terrible teachers, and in computer science they are known as “black boxes”: we can train neural networks, but explaining how they reach their conclusions is anything but easy.
The system starts with a set of random parameters, makes predictions, and then generates follow-up candidates that build on the more successful ones. In this way it steadily narrows the possibilities until it converges on an answer. You can start the process with millions of numbers, yet arrive at the end with only one.
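That narrowing process can be sketched as a simple evolutionary search. The objective below is invented purely for illustration; what matters is the shape of the loop: score many random candidates, keep the more successful ones, perturb them, and repeat until only one answer is left.

```python
import random

random.seed(42)

# Toy objective: find the x that minimises (x - 3.7)^2 — a stand-in
# for "find the parameters that give the best predictions".
def loss(x):
    return (x - 3.7) ** 2

# Start from thousands of random guesses...
population = [random.uniform(-100, 100) for _ in range(10000)]

# ...then repeatedly keep the most successful quarter, carry the single
# best forward unchanged, and perturb the rest a little less each round.
for step in range(10):
    population.sort(key=loss)
    survivors = population[: max(1, len(population) // 4)]
    population = survivors[:1] + [
        x + random.gauss(0, 1.0 / (step + 1)) for x in survivors
    ]

best = min(population, key=loss)
print(best)  # a value close to 3.7
```

The pool shrinks from ten thousand numbers to a handful, mirroring the “millions of numbers down to one” funnel described above.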
We have a long time ahead of us to see where this “black box” approach to artificial intelligence leads. One thing is already clear in practice: neural-network operators spend far more time if they try to use their systems without training them. And given that computers keep getting faster, it seems inevitable that this kind of learning will eventually happen in real time. We may soon see robots that learn from their actions and apply those lessons to their next ones.[/vc_column_text][/vc_column][/vc_row][vc_row css=".vc_custom_1629803910077{margin-bottom: 24px !important;}"][vc_column][vc_column_text]
The best-known example is COMPAS, an algorithm used in the USA to assess prisoners’ likelihood of reoffending. Without offering any explanation, it scored black defendants as roughly twice as likely to reoffend. The biggest problem here was biased data.
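A toy simulation — with entirely made-up numbers, not a model of COMPAS itself — shows how this happens: two groups with identical true behaviour, but historical records that flagged one group twice as often. Any model that learns from those labels faithfully reproduces the bias.

```python
import random

random.seed(7)

# Hypothetical historical records: groups "A" and "B" behave identically,
# but past practice flagged group "A" twice as often (bias in the labels).
def make_record(group):
    flag_rate = 0.4 if group == "A" else 0.2
    return {"group": group, "flagged": random.random() < flag_rate}

records = [make_record("A") for _ in range(5000)] + [make_record("B") for _ in range(5000)]

# A naive "model" that simply learns the flag rate per group...
def learned_rate(group):
    rows = [r for r in records if r["group"] == group]
    return sum(r["flagged"] for r in rows) / len(rows)

# ...ends up rating group "A" as about twice as risky — not because of
# anything in the world, but because of the data it was fed.
rate_a, rate_b = learned_rate("A"), learned_rate("B")
print(rate_a, rate_b)
```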
In other words, the more accurate, unbiased and high-quality the inputs, the more successful the artificial intelligence will be.[/vc_column_text][/vc_column][/vc_row][vc_row css=".vc_custom_1629803910077{margin-bottom: 24px !important;}"][vc_column][vc_column_text]
Beyond its contribution to design and technological development, making such transparency mandatory as a safety measure matters for healthier work and cooperation.
In 2016, technology giants came together to establish the Partnership on AI to Benefit People and Society, which focuses on deploying AI in ethical, fair and inclusive ways.
The Future of Life Institute (FLI), a public-benefit organisation, has drawn up the Asilomar AI Principles, a set of ground rules and ethical guidelines for AI research and design.
It seems we have no choice but to watch, with a mixture of fear and amazement, as artificial intelligence pushes our limits a little further every passing day.[/vc_column_text][/vc_column][/vc_row]