pstarr wrote: Deep Blue finally beat a Grand Master at chess in 1997. And Watson won Jeopardy (its original mandate) in 2011. This, of course, is no big deal. Nothing since.
So what about the game of Go? I'd probably classify that as something since.
onlooker wrote:Well, how did animals evolve consciousness?
Well, ordinarily I would defer to you on this subject, K, since you are the expert.
Deep learning
Deep learning is part of a broader family of machine learning methods based on learning data representations, as opposed to task-specific algorithms. Learning can be supervised, semi-supervised, or unsupervised.
Deep learning models are loosely related to information processing and communication patterns in a biological nervous system, such as neural coding that attempts to define a relationship between various stimuli and associated neuronal responses in the brain.
Deep learning architectures such as deep neural networks, deep belief networks and recurrent neural networks have been applied to fields including computer vision, speech recognition, natural language processing, audio recognition, social network filtering, machine translation, bioinformatics and drug design, where they have produced results comparable to and in some cases superior to human experts.
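The core idea of learning a data representation, rather than hand-coding features, can be sketched with the simplest linear case: principal component analysis via an SVD. This is a hypothetical toy example (the data and numbers are invented for illustration, not from any source cited here):

```python
import numpy as np

rng = np.random.default_rng(0)
# 200 points that really live on a 1-D line inside 2-D space, plus noise
t = rng.normal(size=(200, 1))
X = t @ np.array([[2.0, 1.0]]) + 0.05 * rng.normal(size=(200, 2))

# Unsupervised representation learning: find the direction of maximum
# variance with an SVD instead of hand-coding the feature.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
direction = Vt[0]            # learned 1-D representation axis
codes = Xc @ direction       # each point compressed to one number

# Reconstruct from the 1-D code and measure how much variance survives
X_hat = np.outer(codes, direction) + X.mean(axis=0)
explained = 1 - ((X - X_hat) ** 2).sum() / ((X - X.mean(axis=0)) ** 2).sum()
```

Here the "representation" (a single coordinate per point) is discovered from the data itself; deep networks do the same thing nonlinearly and in stages.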
Applications
Drug discovery and toxicology
A large percentage of candidate drugs fail to win regulatory approval. These failures are caused by insufficient efficacy (on-target effect), undesired interactions (off-target effects), or unanticipated toxic effects. Research has explored the use of deep learning to predict the biomolecular targets, off-target interactions, and toxic effects of environmental chemicals in nutrients, household products, and drugs. AtomNet is a deep learning system for structure-based rational drug design. AtomNet was used to predict novel candidate biomolecules for disease targets such as the Ebola virus and multiple sclerosis.
Bioinformatics
An autoencoder ANN was used in bioinformatics to predict gene ontology annotations and gene-function relationships. In medical informatics, deep learning has been used to predict sleep quality from wearable data and to predict health complications from electronic health record data. Deep learning has also shown efficacy in healthcare more broadly.
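To make the autoencoder idea concrete, here is a minimal linear autoencoder in plain numpy, trained by gradient descent on toy data. The two 6-dimensional "patterns" are invented stand-ins for annotation profiles; this is an illustrative sketch, not the published bioinformatics model:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy data: 40 noisy copies of two underlying 6-dimensional patterns
patterns = np.array([[1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1]], float)
X = patterns[rng.integers(0, 2, size=40)] + 0.1 * rng.normal(size=(40, 6))

# Linear autoencoder with a 2-unit bottleneck: encode 6 -> 2, decode 2 -> 6,
# trained by gradient descent on mean squared reconstruction error.
W1 = 0.3 * rng.normal(size=(6, 2))   # encoder weights
W2 = 0.3 * rng.normal(size=(2, 6))   # decoder weights

def mse(W1, W2):
    return ((X @ W1 @ W2 - X) ** 2).mean()

initial = mse(W1, W2)
lr = 0.1
for _ in range(1000):
    H = X @ W1                        # codes (40 x 2)
    R = H @ W2                        # reconstructions (40 x 6)
    G = 2.0 * (R - X) / X.size        # gradient of MSE w.r.t. R
    gW2 = H.T @ G                     # backprop into decoder
    gW1 = X.T @ (G @ W2.T)            # backprop into encoder
    W1 -= lr * gW1
    W2 -= lr * gW2

final = mse(W1, W2)
```

The network is forced to squeeze each 6-dimensional profile through 2 numbers, so reconstruction only improves if the bottleneck captures the underlying patterns; real autoencoders add nonlinearities and more layers.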
Reliability of infrastructure systems
Natural disasters can have catastrophic impacts on the functionality of infrastructure systems and cause severe physical and socio-economic losses. Given budget constraints, it is crucial to optimize decisions regarding mitigation, preparedness, response, and recovery practices for these systems. This requires accurate and efficient means to evaluate the infrastructure system reliability. Deep neural networks have been used for accurate, efficient, and accelerated infrastructure system reliability analysis.
Automatic speech recognition
Large-scale automatic speech recognition was the first and most convincing success of deep learning.
Image recognition
Deep learning-based image recognition has become "superhuman", producing more accurate results than human contestants. This first occurred in 2011. Deep learning-trained vehicles now interpret 360° camera views. Another example is Facial Dysmorphology Novel Analysis (FDNA) used to analyze cases of human malformation connected to a large database of genetic syndromes.
Natural language processing
Neural networks have been used for implementing language models since the early 2000s. LSTM helped to improve machine translation and language modeling. Google Translate uses a large end-to-end long short-term memory network. GNMT uses an example-based machine translation method in which the system "learns from millions of examples." It translates "whole sentences at a time, rather than pieces." Google Translate supports over one hundred languages. The network encodes the "semantics of the sentence rather than simply memorizing phrase-to-phrase translations."
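The LSTM cell mentioned above is just a few gated update equations. Below is one forward step of a standard LSTM cell in plain numpy, with randomly initialized weights; this is a generic textbook cell for illustration, not Google's GNMT:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One step of a standard LSTM cell: four gates computed from the
    current input x and the previous hidden state h_prev."""
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b      # all four gate pre-activations at once
    i = sigmoid(z[0:n])             # input gate
    f = sigmoid(z[n:2*n])           # forget gate
    o = sigmoid(z[2*n:3*n])         # output gate
    g = np.tanh(z[3*n:4*n])         # candidate cell update
    c = f * c_prev + i * g          # new cell state: forget some, write some
    h = o * np.tanh(c)              # new hidden state
    return h, c

rng = np.random.default_rng(2)
n_in, n_hid = 3, 4
W = rng.normal(scale=0.5, size=(4 * n_hid, n_in))
U = rng.normal(scale=0.5, size=(4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)

h = np.zeros(n_hid)
c = np.zeros(n_hid)
for x in rng.normal(size=(5, n_in)):   # run over a length-5 input sequence
    h, c = lstm_step(x, h, c, W, U, b)
```

The forget gate is what lets the cell carry information across many steps, which is why LSTMs can handle "whole sentences at a time" rather than fixed-size windows.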
Image restoration
Deep learning has been successfully applied to inverse problems such as denoising, super-resolution, and inpainting. These applications include learning methods such as "Shrinkage Fields for Effective Image Restoration", which trains on an image dataset, and Deep Image Prior, which trains on the very image that needs restoration.
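For intuition on what an "inverse problem" means here, the classical (non-deep) baseline is useful: pose denoising as an optimization that trades fidelity to the noisy measurement against smoothness. The 1-D Tikhonov-regularized example below is a hand-picked toy, not any of the learned methods named above:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
t = np.linspace(0, 1, n)
clean = np.sin(2 * np.pi * t)            # ground-truth signal
noisy = clean + 0.3 * rng.normal(size=n)

# Denoising as a regularized inverse problem:
#   minimize ||x - noisy||^2 + lam * ||D x||^2
# where D is a finite-difference operator penalizing roughness.
# The minimizer solves (I + lam * D^T D) x = noisy.
D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)
lam = 50.0
x = np.linalg.solve(np.eye(n) + lam * D.T @ D, noisy)

err_noisy = np.sqrt(((noisy - clean) ** 2).mean())
err_denoised = np.sqrt(((x - clean) ** 2).mean())
```

Methods like Shrinkage Fields effectively learn the regularizer from data instead of fixing it by hand, and Deep Image Prior replaces it with the structure of an untrained network.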
pstarr wrote: Ultimately what makes ML different is the speed and abundance of fast processors. They can try any and all solution paths on the tree, and then find the average least-stupid solution. ML algorithms are glorified curve-fitting algorithms with some modern twists.
It's not just about more and faster processors. It's also about an abundance of digital data used to teach the machines, instead of having everything hardcoded.
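For what it's worth, the "glorified curve fitting" in the quoted post has a literal minimal form: least-squares regression. The toy below (invented data and coefficients, purely illustrative) recovers a quadratic from noisy samples, which is the kernel that larger ML models elaborate on with more parameters and more data:

```python
import numpy as np

rng = np.random.default_rng(4)
# "Training data": noisy samples of y = 2x^2 - 3x + 1
x = np.linspace(-1, 1, 50)
y = 2 * x**2 - 3 * x + 1 + 0.05 * rng.normal(size=50)

# Least-squares fit of a degree-2 polynomial: the curve-fitting core.
# np.polyfit returns coefficients from the highest power down.
coeffs = np.polyfit(x, y, deg=2)
```

The data-abundance point in the reply shows up even here: with only a handful of samples the recovered coefficients would be much noisier.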
What Is The Difference Between Artificial Intelligence And Machine Learning?
As technology, and, importantly, our understanding of how our minds work, have progressed, our concept of what constitutes AI has changed. Rather than pursuing increasingly complex calculations, work in the field of AI has concentrated on mimicking human decision-making processes and carrying out tasks in ever more human ways.
Two important breakthroughs led to the emergence of Machine Learning as the vehicle driving AI development forward at its current pace.
One of these was the realization – credited to Arthur Samuel in 1959 – that rather than teaching computers everything they need to know about the world and how to carry out tasks, it might be possible to teach them to learn for themselves.
The second, more recently, was the emergence of the internet, and the huge increase in the amount of digital information being generated, stored, and made available for analysis.
Once these innovations were in place, engineers realized that rather than teaching computers and machines how to do everything, it would be far more efficient to code them to think like human beings, and then plug them into the internet to give them access to all of the information in the world.
Much of the exciting progress that we have seen in recent years is thanks to the fundamental changes in how we envisage AI working, which have been brought about by ML.
There are two kinds of AI, and the difference is important
Most of today’s AI is designed to solve specific problems. Today’s artificial intelligence is certainly formidable. But if AI does steal your job, it won’t be because scientists have built a brain better than yours. At least, not across the board. Most of the advances in artificial intelligence have been focused on solving particular kinds of problems. This narrow artificial intelligence is great at specific tasks like recommending songs on Pandora or analyzing how safe your driving habits are. However, the kind of general artificial intelligence that would simulate a person is a long way off.
“At the very beginning of AI there was a lot of discussion about more general approaches to AI, with aspirations to create systems…that would work on many different problems,” says John Laird, a computer scientist at the University of Michigan. “Over the last 50 years the evolution has been towards specialization.”
dohboi wrote: We're getting rather off topic, but since others, including mods, are taking part, I might as well pitch in with my tuppence.
A friend of mine pointed out... how would we know if the 'singularity' has already passed?
Would the whole global economic system re-gear itself to primarily support machines over people and the living planet?
But how would that be different than the modern industrial society we have now?
KaiserJeep wrote:
Be flexible and willing to learn, or live on the dole, or become homeless; those are the choices.