Making Tesla pt. 2

Discussions about the economic and financial ramifications of PEAK OIL

Re: Making Tesla pt. 2

Unread postby Outcast_Searcher » Fri 06 Oct 2017, 10:25:15

KaiserJeep wrote:GHung, I was never a network engineer. I built Tandem NonStop systems, aka fault tolerant transaction processing computers. For the last 25+ years they have run every stock and commodity exchange, every bank-by-wire system, every credit card authorization, cellphone billing, airline reservations, every single place you need continuous system availability or you lose money.

Those may run many or maybe even most such transactions, but certainly not all.

For example, I personally managed the (software for the) database systems that took care of the credit card systems for some large banks in the 2000-2007 timeframe. Those ran on IBM Z-series mainframes, running MVS. I built and maintained some systems on IBM's parallel sysplex architecture -- a fault tolerant cluster of Z-series CPUs. Just 10 to 15 years ago, those were quite rare, as customers didn't want to pay for the fault tolerance -- even for things like banking and credit card transactions, which I handled for customer banks who wouldn't pay for fault tolerance (though they screamed about outages -- idiots).

IBM ran a lot of applications for a LOT of banks on their Z-series machines, and in my experience, most of them had a big single point of failure if the hardware crashed or the OS or a major subsystem went bonkers (though that was gradually becoming less frequent in the 00's).

These days I eat lunch weekly (to keep up) with friends at IBM who are in charge of the team that builds the encryption software for the add-in cards (crypto cards) used in IBM Z series computers for the banking industry. So I know for a fact that a LOT of banking transactions, credit card transactions, etc. are still run on IBM Z series computers. (Rebuilding all their application software to run on more modern server hardware would cost a LOT and be a big disruption. They don't want to pay for that. IBM makes a LOT of money on that legacy business, until it all gradually goes away).

There is a big difference between the claim that "these are really cool" or "these are the best" and "these run all transactions of type X, Y, Z, over the past 25 years". Kind of like for your claims that AGW via GHG's isn't real -- your intuition can be a huge distance from the objective reality.
Outcast_Searcher
COB
 
Posts: 3862
Joined: Sat 27 Jun 2009, 20:26:42

Re: Making Tesla pt. 2

Unread postby pstarr » Fri 06 Oct 2017, 11:13:45

Kub, NVIDIA designs and builds graphics-processing units. These are not general-purpose CPUs/computers. They manipulate vast amounts of data, but have limited instruction sets and no capacity to load vast artificial intelligence programs. They don't do what general-purpose computers are designed for.

Modern GPUs use most of their transistors to do calculations related to 3D computer graphics. They were initially used to accelerate the memory-intensive work of texture mapping and rendering polygons, later adding units to accelerate geometric calculations such as the rotation and translation of vertices into different coordinate systems. Recent developments in GPUs include support for programmable shaders which can manipulate vertices and textures with many of the same operations supported by CPUs, oversampling and interpolation techniques to reduce aliasing, and very high-precision color spaces. Because most of these computations involve matrix and vector operations, engineers and scientists have increasingly studied the use of GPUs for non-graphical calculations; they are especially suited to other embarrassingly parallel problems.
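To make the "embarrassingly parallel" point concrete, here is a minimal NumPy sketch (mine, not from the quoted text) of the vertex rotation and translation work described above: one matrix multiply transforms every vertex independently, which is exactly the shape of work a GPU spreads across thousands of cores.

```python
# Minimal sketch: why GPU-style graphics work is "embarrassingly parallel" --
# every vertex is transformed by the same matrix, independently of the others.
import numpy as np

def rotate_z(theta):
    """4x4 homogeneous rotation about the z axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

def translate(dx, dy, dz):
    """4x4 homogeneous translation."""
    m = np.eye(4)
    m[:3, 3] = [dx, dy, dz]
    return m

# 100,000 vertices in homogeneous coordinates (x, y, z, 1).
vertices = np.random.rand(100_000, 4)
vertices[:, 3] = 1.0

# One matrix multiply applies rotation + translation to every vertex at once;
# a GPU does the same thing, spread across thousands of cores.
transform = translate(1.0, 0.0, -5.0) @ rotate_z(np.pi / 4)
transformed = vertices @ transform.T
print(transformed.shape)  # (100000, 4)
```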

Handling graphics is the easy stuff. Visual data collected from the real world has to be 'considered' and assigned value like a human would. It's not a computer game, where outcomes are pre-programmed. In a computer game, if you shoot this (known) alien from that (known) vector, the alien dies. Rather simple programming. The graphics processor and associated computer have a fixed map. The alien can only be in a fixed field, with fixed obstacles. The real world is not like that.

A CPU (as yet not even imagined) must categorize images of children and also basketballs, crumpled paper, dogs, etc. (all in different sizes, colors, patterns, densities, weights, emotional content), compare them on the fly, and add a value judgment to their presence in the field: "don't hit that", "don't swerve to avoid that". That 10-year-old Intel processor (the best there is) is not up to the job. Tesla should probably just wait for a quantum processor. Or a real neural network. Or an optical processor. Nothing like that really exists now, and perhaps never will. For Tesla to even suggest this is negligence (by omission), perhaps criminal, and it has already resulted in deaths.
Haven't you heard? I'm a doomer!
pstarr
NeoMaster
 
Posts: 26076
Joined: Mon 27 Sep 2004, 02:00:00
Location: Behind the Redwood Curtain

Re: Making Tesla pt. 2

Unread postby KaiserJeep » Fri 06 Oct 2017, 12:20:31

OS, learn how things work. The IBM machine was front-ending a NonStop computer which kept the corporate accounts. The exact flavor of computing is OLTP -- OnLine Transaction Processing -- which can only be done with fault tolerant hardware and a unique message-based operating system where the application executes in two CPUs simultaneously as a primary/backup process pair. The CPUs communicate over a dual redundant set of interprocessor busses, using a proprietary NonStop DynaBus protocol (originally) and dual redundant InfiniBand (latest systems).
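For readers unfamiliar with the process-pair idea, here is a toy Python sketch of the checkpointing pattern described above. It is purely illustrative -- the class and method names are made up and bear no resemblance to NonStop's actual system interfaces.

```python
# Toy sketch of a primary/backup process pair: the primary checkpoints its
# state to the backup after each transaction, so the backup can take over
# mid-stream if the primary dies. Illustrative only.
import copy

class ProcessPair:
    def __init__(self):
        self.primary_state = {"balance": 0, "last_txn": None}
        self.backup_state = copy.deepcopy(self.primary_state)
        self.primary_alive = True

    def checkpoint(self):
        # In a real system this message crosses a redundant interprocessor bus.
        self.backup_state = copy.deepcopy(self.primary_state)

    def apply(self, txn_id, amount):
        state = self.primary_state if self.primary_alive else self.backup_state
        state["balance"] += amount
        state["last_txn"] = txn_id
        if self.primary_alive:
            self.checkpoint()

    def fail_primary(self):
        # The backup resumes from the last checkpoint; no acknowledged
        # transaction is lost because checkpoints precede acknowledgment.
        self.primary_alive = False

pair = ProcessPair()
pair.apply("t1", 100)
pair.fail_primary()
pair.apply("t2", -30)
print(pair.backup_state)  # {'balance': 70, 'last_txn': 't2'}
```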

NonStop had one competitor for a while in the unique application space I am talking about. The company was called Stratus Computers, Inc., and (entirely because IBM had ZERO hardware suitable for this application environment) IBM re-badged a Stratus as the IBM System/88, because we had stolen away a lot of banks and stock exchanges from them. But the Stratus lacked the middleware that the (then-called) Tandem NonStop System excelled at, and system integration between a System/88 and mainstream IBM hardware was slow and costly. The IBM System/88 was available for less than a decade; then Stratus went belly-up.

https://domino.research.ibm.com/tchjr/journalindex.nsf/d9f0a910ab8b637485256bc80066a393/cfc96f12d1fb70c385256bfa00685bd8!OpenDocument


Today IBM bids NonStop hardware as part of a larger deal; HP produces it without logos on the cabinet, and IBM pays for installation and maintenance by subcontracting to HP NonStop Systems. Rumor has it that they have even attached IBM logos to the racks.

IBM is not the only computer manufacturer to suffer from -- call it competitive angst. Dell runs the warehouse logistics system for its "build to order" PCs on a NonStop System. We use HP PCs as consoles on NonStop systems; Dell tried substituting their own PCs, but experienced large software integration costs with every new console software CD -- so they pried the HP logos off the consoles, attached Dell logos, and locked the computer room door. Dell never did care about the NonStop system itself, because they did not sell anything remotely similar. But they did not want their customers to see HP PCs in their computer room.

The existence of HP NonStop Systems within IBM online applications has never been publicized; I have even had local Silicon Valley IBM employees refuse to believe it. Certainly, IBM never tells their customers -- they sell the whole solution and actually subcontract to many vendors. But the NonStop is the only database-of-record machine, even today.

The whole world literally runs on NonStop computers, but these systems sit quietly in secret locations in chilled rooms, counting money. Most computer professionals -- even those who make a living fronting NonStop systems -- are unfamiliar with them.
KaiserJeep 2.0, Neural Subnode 0010 0000 0001 0110 - 1001 0011 0011, Tertiary Adjunct to Unimatrix 0000 0000 0001

Resistance is Futile, YOU will be Assimilated.

Warning: Messages timestamped before April 1, 2016, 06:00 PST were posted by the unmodified human KaiserJeep 1.0
KaiserJeep
Fusion
 
Posts: 4096
Joined: Tue 06 Aug 2013, 16:16:32
Location: California's Silly Valley

Re: Making Tesla pt. 2

Unread postby pstarr » Fri 06 Oct 2017, 12:35:35

KJ, is your post in response to mine? It doesn't sound like it. I thought I responded to Kub's contention that NVIDIA has built a super-computer capable of driving a car. I merely pointed out that that is impossible.
Haven't you heard? I'm a doomer!
pstarr
NeoMaster
 
Posts: 26076
Joined: Mon 27 Sep 2004, 02:00:00
Location: Behind the Redwood Curtain

Re: Making Tesla pt. 2

Unread postby KaiserJeep » Fri 06 Oct 2017, 13:27:58

No, I was replying to the OS post directly above yours, and had not even read your post as I was then typing mine. You are also talking about a different class of computer. The GPUs manufactured by HP are almost all used by the scientific instrument company spun off from HP, Agilent Technologies.
KaiserJeep 2.0, Neural Subnode 0010 0000 0001 0110 - 1001 0011 0011, Tertiary Adjunct to Unimatrix 0000 0000 0001

Resistance is Futile, YOU will be Assimilated.

Warning: Messages timestamped before April 1, 2016, 06:00 PST were posted by the unmodified human KaiserJeep 1.0
KaiserJeep
Fusion
 
Posts: 4096
Joined: Tue 06 Aug 2013, 16:16:32
Location: California's Silly Valley

Re: Making Tesla pt. 2

Unread postby kublikhan » Fri 06 Oct 2017, 15:13:42

You're living in the past pstarr. GPUs are no longer confined to pushing pixels around your screen. Nvidia is the world leader in AI and machine learning systems, with its GPUs at the core.

Nvidia is riding high on its core technology, the graphics processing unit used in the machine-learning that powers the algorithms of Facebook and Google; partnerships with nearly every company keen on building self-driving cars; and freshly announced hardware deals with three of China’s biggest internet companies.

Nvidia will likely see competition in the near future. At least 15 public companies and startups are looking to capture the market for a “second wave” of AI chips, which promise faster performance with decreased energy consumption. Nvidia’s GPUs were originally developed to speed up graphics for gaming; the company then pivoted to machine learning. Competitors’ chips, however, are being custom-built for the purpose.

ARK predicts Nvidia will keep its technology ahead of the competition. Even disregarding the market advantage of capturing a strong initial customer base, Wang notes that the company is also continuing to increase the efficiency of GPU architecture at a rate fast enough to be competitive with new challengers. Nvidia has improved the efficiency of its GPU chips about 10x over the past four years.

Nvidia has also been investing since the mid-aughts in research to optimize how machine-learning frameworks, the software used to build AI programs, interact with the hardware, critical to ensuring efficiency. It currently supports every major machine-learning framework; Intel supports four, AMD supports two, Qualcomm supports two, and Google supports only Google’s.

Since GPUs aren’t specifically built for machine learning, they can also pull double-duty in a datacenter as video- or image-processing hardware. TPUs are custom-built for AI only, which means they’re inefficient at tasks like transcoding video into different qualities or formats. Nvidia CEO Jen-Hsun Huang told investors in August that “a GPU is basically a TPU that does a lot more.” “Until TPUs demonstrate an unambiguous lead over GPUs in independent tests, Nvidia should continue to dominate the deep-learning data center.”
Despite the hype, nobody is beating Nvidia in AI

The future is machine learning, and no machine learns as well as a graphics card.

During the course of 2016, Nvidia and AMD saw their stock prices skyrocket. Both companies are now trading at multiple times their value from a year ago, and the explanation lies in their massively expanded potential for future growth. All the AI hype that we heard during CES this past week is underpinned by a multitude of algorithms and mathematical calculations, and in its most sophisticated form it harnesses methods of machine learning and deep learning to evolve its awareness without being fed answers directly by a human. All of that new technology requires a lot of processing power, and it just so happens that AMD and Nvidia were already making the perfect processors for the task: graphics cards. GPU acceleration has fast become the standard for machine learning.

It’s not that Intel is oblivious to the expanding market being created by the move toward machine learning — the company has an entire aspirational website dedicated to the subject — but its chips are at a fundamental disadvantage and it hasn’t secured the customers or made the same sort of progress that its rivals already have.

Nvidia’s rise is also no surprise. The green graphics giant has been talking about deep learning and autonomous cars for at least three years at CES. It was bemusing at first, intriguing after a while, and now it’s turning into real-world self-driving vehicles thanks to a partnership with Audi. At the same time as Google and AMD were announcing their 2017 plans for Radeon-driven machine learning in the cloud, Nvidia and IBM revealed their own agreement to provide "the world’s fastest" deep learning enterprise solution. The next time a company offers you a cloud-based service of any kind — such as Google’s system for handwriting recognition in the new Chromebooks — odds are good that there’ll be a GPU farm somewhere churning through the mathematical tasks of making it happen.

But the most interesting dynamic that’s developed over the past year is how, ever so subtly and behind the scenes, AMD and Nvidia have essentially stolen Intel’s future away from it. Intel exists to satisfy our processing needs, but just as we discover a rich new vein of computational power needs, it turns out that Intel’s CPUs have already been surpassed by the basic architectural advantages of chips that were originally designed to push pixelated shoot-em-up targets around a monitor.

It’s a fun twist of fate for everyone outside Intel, and the good execution exhibited by both AMD and Nvidia so far also portends well for the speed of improvement in AI and machine learning capabilities. At a time when Intel is still scrambling to find mega-tasking scenarios for its chips, its GPU rivals are more concerned with how fast they can churn out the hardware to satisfy demand that looks set to only continue growing.
The demand for AI is helping Nvidia and AMD leapfrog Intel
The oil barrel is half-full.
kublikhan
Fission
 
Posts: 3969
Joined: Tue 06 Nov 2007, 03:00:00
Location: Illinois

Re: Making Tesla pt. 2

Unread postby pstarr » Fri 06 Oct 2017, 15:51:23

There is no AI. Siri is a retard . . . oh excuse me, 'learning disabled' . . . and her children will be, at best, sub-grade morons. Never smart enough to navigate a dark street in a strange town.
Haven't you heard? I'm a doomer!
pstarr
NeoMaster
 
Posts: 26076
Joined: Mon 27 Sep 2004, 02:00:00
Location: Behind the Redwood Curtain

Re: Making Tesla pt. 2

Unread postby kublikhan » Fri 06 Oct 2017, 16:37:53

Pstarr, have you been living under a rock these past 10 years? Improvements in algorithms and processing power have brought many advances to AI in the last decade. Diagnosing cancer, allowing you to speak foreign languages, self-driving cars, image recognition, speech recognition -- the list goes on and on. Have you done any translating recently? Ten-year-old systems used to give you comically bad translations. Today, the translations are very readable and near human level:

Today, Quoc and his colleagues at Google rolled out a new translation system that uses massive amounts of data and increased processing power to build more accurate translations. The new system, a deep learning model known as neural machine translation, effectively trains itself—and reduces translation errors by up to 87%. “This demonstrates like never before the power of neural machine translation.”
Google’s new translation software is powered by brainlike artificial intelligence
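For the curious, the encoder-decoder structure behind neural machine translation can be sketched in a few lines of PyTorch. This is a toy skeleton with made-up vocabulary sizes, not Google's production system: the encoder compresses the source sentence into a hidden state, and the decoder unrolls that state into target-language tokens.

```python
# Minimal PyTorch sketch of the encoder-decoder idea behind neural machine
# translation. A toy skeleton for illustration, not a production model.
import torch
import torch.nn as nn

class TinySeq2Seq(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, hidden=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, hidden)
        self.tgt_emb = nn.Embedding(tgt_vocab, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        # The encoder compresses the source sentence into a hidden state...
        _, h = self.encoder(self.src_emb(src_ids))
        # ...and the decoder unrolls that state into target-language tokens.
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), h)
        return self.out(dec_out)  # logits over the target vocabulary

model = TinySeq2Seq(src_vocab=8000, tgt_vocab=8000)
src = torch.randint(0, 8000, (2, 10))   # batch of 2 source sentences
tgt = torch.randint(0, 8000, (2, 12))   # shifted target sentences
logits = model(src, tgt)
print(logits.shape)  # torch.Size([2, 12, 8000])
```

Trained on millions of sentence pairs (and scaled up enormously), this same pattern is what "effectively trains itself" refers to in the quote above.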

You can even get headphones that hook into the service to do real time language translation right into your ears or use an app to speak in a foreign language:
Google has launched its first pair of wireless headphones featuring real time language translation from Google Translate. The headphones, called Google Pixel Buds, connect to an Android or Google Pixel smartphone, connecting to the voice-controlled Google Assistant to make phone calls, play music or even understand other languages.

The translation software allows users to both listen to and speak in foreign languages using their smartphone. For listening services, holding down the earbud will translate another language into the user's chosen language. They can also use the Google Assistant to speak other languages using the Google Translate app. By pressing the earbud and saying "let me speak Italian", users will be able to talk in English and their smartphone speakers will automatically translate into Italian.
Google’s new headphones can translate foreign languages in real time

Then there are the advances in image recognition. The same four companies all have features that let you search or automatically organize collections of photos with no identifying tags. You can ask to be shown, say, all the ones that have dogs in them, or snow, or even something fairly abstract like hugs. The companies all have prototypes in the works that generate sentence-long descriptions for the photos in seconds.

Think about that. To gather up dog pictures, the app must identify anything from a Chihuahua to a German shepherd and not be tripped up if the pup is upside down or partially obscured, at the right of the frame or the left, in fog or snow, sun or shade. At the same time it needs to exclude wolves and cats. Using pixels alone. How is that possible?

The advances in image recognition extend far beyond cool social apps. Medical startups claim they’ll soon be able to use computers to read X-rays, MRIs, and CT scans more rapidly and accurately than radiologists, to diagnose cancer earlier and less invasively, and to accelerate the search for life-saving pharmaceuticals. Better image recognition is crucial to unleashing improvements in robotics, autonomous drones, and, of course, self-driving cars—a development so momentous that we made it a cover story in June.

But what most people don’t realize is that all these breakthroughs are, in essence, the same breakthrough. They’ve all been made possible by a family of artificial intelligence (AI) techniques popularly known as deep learning, though most scientists still prefer to call them by their original academic designation: deep neural networks.

The most remarkable thing about neural nets is that no human being has programmed a computer to perform any of the stunts described above. In fact, no human could. Programmers have, rather, fed the computer a learning algorithm, exposed it to terabytes of data—hundreds of thousands of images or years’ worth of speech samples—to train it, and have then allowed the computer to figure out for itself how to recognize the desired objects, words, or sentences. In short, such computers can now teach themselves. “You essentially have software writing software,” says Jen-Hsun Huang, CEO of graphics processing leader Nvidia, which began placing a massive bet on deep learning about five years ago.

Neural nets aren’t new. The concept dates back to the 1950s, and many of the key algorithmic breakthroughs occurred in the 1980s and 1990s. What’s changed is that today computer scientists have finally harnessed both the vast computational power and the enormous storehouses of data—images, video, audio, and text files strewn across the Internet—that, it turns out, are essential to making neural nets work well. “This is deep learning’s Cambrian explosion.”

Venture capitalists, who didn’t even know what deep learning was five years ago, today are wary of startups that don’t have it. “We’re now living in an age,” Chen observes, “where it’s going to be mandatory for people building sophisticated software applications.” People will soon demand, he says, “ ‘Where’s your natural-language processing version?’ ‘How do I talk to your app? Because I don’t want to have to click through menus.’ ”
WHY DEEP LEARNING IS SUDDENLY CHANGING YOUR LIFE
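The "find the dogs using pixels alone" capability is now an off-the-shelf exercise. A short sketch, assuming PyTorch/torchvision are installed and a hypothetical dog.jpg sits on disk, classifying a photo with a pretrained ImageNet network:

```python
# Sketch: classify a photo with an off-the-shelf pretrained network
# (torchvision's ResNet-50). "dog.jpg" is a stand-in path.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(pretrained=True).eval()

img = preprocess(Image.open("dog.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probs = torch.softmax(model(img), dim=1)

top5 = torch.topk(probs, 5)
# ImageNet classes 151-268 are dog breeds, so "is there a dog in this photo"
# falls out of the same classifier that also knows about wolves and cats.
print(top5.indices, top5.values)
```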

Artificial intelligence (AI) is about more than just the promise of a robot butler — it can actually save lives. AI’s contribution to the healthcare industry and in medical research could be hugely significant. IBM sees that and wants Watson, its AI technology, at the forefront of this development. Human experts at the University of North Carolina School of Medicine tested Watson by having the AI analyze 1,000 cancer diagnoses. In 99 percent of the cases, Watson was able to recommend treatment plans that matched actual suggestions from oncologists. Not only that, but because it can read and digest thousands of documents in minutes, Watson found treatment options human doctors missed in 30 percent of the cases. The AI’s processing power allowed it to take into account all of the research papers or clinical trials that the human oncologists might not have read at the time of diagnosis. IBM is also working with medical lab company Quest Diagnostics to provide gene sequencing matched with diagnostic analysis courtesy of Watson, which would be made available as a cloud service oncologists could access. “This is the broad commercialization of Watson in oncology.”
IBM’s Watson AI Recommends Same Treatment as Doctors in 99% of Cancer Cases
The oil barrel is half-full.
kublikhan
Fission
 
Posts: 3969
Joined: Tue 06 Nov 2007, 03:00:00
Location: Illinois

Re: Making Tesla pt. 2

Unread postby pstarr » Fri 06 Oct 2017, 16:42:18

Kub, tell it to Siri. Maybe she will understand you.

I can't even get the cable company to understand a simple yes and no. I have to take my hands off the wheel and tap the keyboard. Then I lose my eyeglasses and have to go digging around down by the brakes and gas pedal. What a mess! yikes
Haven't you heard? I'm a doomer!
pstarr
NeoMaster
 
Posts: 26076
Joined: Mon 27 Sep 2004, 02:00:00
Location: Behind the Redwood Curtain

Re: Making Tesla pt. 2

Unread postby asg70 » Fri 06 Oct 2017, 18:28:23

pstarr wrote:Tesla should probably just wait for a quantum processor. Or a real neural network. Or an optical processor. Nothing like that really exists now, and perhaps never will. For Tesla to even suggest this is negligence (by omission), perhaps criminal, and it has already resulted in deaths.


PStarr, you don't know what the hell you are talking about. When was the last time you were directly involved with computer programming? The late 80s? Why is it you feel qualified, then, to place odds on whether autonomy will or won't crack this nut? You're NOT working on these problems. You do NOT keep up with the associated literature. Your only interest is to be a knee-jerk naysayer. Your opinions are therefore worthless. Completely and utterly worthless.

For my own entertainment I've been watching some YouTube clips on the history of computing, and then veered into all the engineering work that went into the Apollo program that got us to the moon in less than 10 years. Your attitude drives me up the wall because it is defeatist. All problems are dead-ends to you. There is no room for innovation in your world-view. Wherever we are technologically is as far as humanity is ever going to go. It's only downhill from here, folks! If pessimistic attitudes like this were the norm then we'd never have left the stone age.

Here, if you can stop your addiction to being a naysayer for 45 f*cking minutes, watch this.

https://www.youtube.com/watch?v=mucb4Ttt1oY

Take note of how many screwups took place during the development of the Saturn V. There were LOTS of them, including careless ones. The reason we got to the moon and innovated in all sorts of areas in the process (like chip development) is because people did not simply throw their hands up the way you do and say "it can't be done"!

You seem to hold engineers in contempt, but they have a unique skill. When they fail, they learn from that failure, and they keep trying. They have grit. Tenacity. Incredible focus and work ethic. Doomers don't. Doomers celebrate failure. It's schadenfreude. It's all too easy. It's lazy, and it's tailor-made for the internet where everyone's an armchair critic. I'm not saying everybody should be an engineer, but the degree of luddite disrespect served up towards the kind of people responsible for giving you the very means to heap this disrespect on them is appalling. Biting the hand that feeds you, basically.

I see your attitude as not just annoying, but altogether toxic to society. We need to develop a can-do attitude about a whole lot of things. It is not "magical thinking". It is a matter of nurturing our inherent problem-solving skills in all areas.
Hubbert's curve, meet S-curve: https://www.youtube.com/watch?v=2b3ttqYDwF0
asg70
Intermediate Crude
 
Posts: 911
Joined: Sun 05 Feb 2017, 13:17:28

Re: Making Tesla pt. 2

Unread postby pstarr » Fri 06 Oct 2017, 19:00:23

Asgy, you'd be surprised at how little microprocessor technology has really changed. Yes, processors have become smaller, denser, and thus faster. But it's still the same serial instruction-processing machine. Read a byte, decode a byte, process a byte. Repeat.

That's in spite of Intel's parallel-processing marketing claims. Only special applications take advantage of parallel processing . . . and AI is not one of them.

The form, design, and implementation of CPUs have changed over the course of their history, but their fundamental operation remains almost unchanged. Principal components of a CPU include the arithmetic logic unit (ALU) that performs arithmetic and logic operations, processor registers that supply operands to the ALU and store the results of ALU operations, and a control unit that orchestrates the fetching (from memory) and execution of instructions by directing the coordinated operations of the ALU, registers and other components.
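A toy fetch-decode-execute loop makes the quoted description concrete: a couple of registers, a two-operation ALU, and a control loop stepping a program counter. This is a pedagogical sketch, not how any real (pipelined, superscalar) CPU is implemented.

```python
# Toy fetch-decode-execute cycle: registers, a tiny "ALU", and a control loop.
def run(program):
    registers = {"r0": 0, "r1": 0}
    pc = 0  # program counter
    while pc < len(program):
        op, *args = program[pc]          # fetch + decode
        if op == "load":                 # execute: load immediate
            registers[args[0]] = args[1]
        elif op == "add":                # execute: ALU add
            registers[args[0]] = registers[args[0]] + registers[args[1]]
        elif op == "halt":
            break
        pc += 1                          # advance to the next instruction
    return registers

print(run([("load", "r0", 2), ("load", "r1", 40), ("add", "r0", "r1"), ("halt",)]))
# {'r0': 42, 'r1': 40}
```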

It's not my attitude that is the problem, it's the marketing BS by Apple, Intel, Tesla et al. Notice how Google has shut up regarding its much-vaunted AI car program. They basically gave up after admitting that the project is impossible without continuous road mapping. A single broken stop sign breaks the system.
Haven't you heard? I'm a doomer!
pstarr
NeoMaster
 
Posts: 26076
Joined: Mon 27 Sep 2004, 02:00:00
Location: Behind the Redwood Curtain

Re: Making Tesla pt. 2

Unread postby kublikhan » Fri 06 Oct 2017, 19:21:21

And that is why I said you are living in the past pstarr. A GPU is a parallel architecture. Parallel computing is used heavily in deep learning. It's like your computer knowledge is several decades out of date.

GPU vs CPU Performance
A simple way to understand the difference between a GPU and a CPU is to compare how they process tasks. A CPU consists of a few cores optimized for sequential serial processing while a GPU has a massively parallel architecture consisting of thousands of smaller, more efficient cores designed for handling multiple tasks simultaneously. GPUs have thousands of cores to process parallel workloads efficiently.
Video: GPU vs CPU
WHAT IS GPU-ACCELERATED COMPUTING?

Although machine learning has been around for decades, two relatively recent trends have sparked widespread use of machine learning: the availability of massive amounts of training data, and powerful and efficient parallel computing provided by GPU computing. GPUs are used to train these deep neural networks using far larger training sets, in an order of magnitude less time, using far less datacenter infrastructure. GPUs are also being used to run these trained machine learning models to do classification and prediction in the cloud, supporting far more data volume and throughput with less power and infrastructure.

Early adopters of GPU accelerators for machine learning include many of the largest web and social media companies, along with top tier research institutions in data science and machine learning. With thousands of computational cores and 10-100x application throughput compared to CPUs alone, GPUs have become the processor of choice for processing big data for data scientists.
MACHINE LEARNING
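The CPU-vs-GPU contrast in the quote is easy to see for yourself. A minimal sketch, assuming PyTorch and optionally a CUDA-capable GPU, timing the same large matrix multiply on each; on typical hardware the GPU run is dramatically faster, which is the throughput gap being described.

```python
# Sketch: the same matrix multiply on a few serial CPU cores vs. thousands
# of parallel GPU cores. Requires PyTorch; the GPU branch needs CUDA.
import time
import torch

a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

t0 = time.time()
_ = a @ b                                   # CPU: a handful of cores
cpu_s = time.time() - t0

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()                # make sure the copies finished
    t0 = time.time()
    _ = a_gpu @ b_gpu                       # GPU: thousands of cores
    torch.cuda.synchronize()                # wait for the async kernel
    gpu_s = time.time() - t0
    print(f"CPU {cpu_s:.3f}s vs GPU {gpu_s:.3f}s")
else:
    print(f"CPU {cpu_s:.3f}s (no GPU available)")
```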
The oil barrel is half-full.
kublikhan
Fission
 
Posts: 3969
Joined: Tue 06 Nov 2007, 03:00:00
Location: Illinois

Re: Making Tesla pt. 2

Unread postby pstarr » Fri 06 Oct 2017, 19:37:03

kublikhan wrote:And that is why I said you are living in the past pstarr. A GPU is a parallel architecture. Parallel computing is used heavily in deep learning. It's like your computer knowledge is several decades out of date.


Deep Learning Is Still A No-Show In Gartner 2016 Hype Cycle For Emerging Technologies

For the 22nd year, Gartner has released its much-discussed hype cycle report on emerging technologies, “providing a cross-industry perspective on the technologies and trends that business strategists, chief innovation officers, R&D leaders, entrepreneurs, global market developers and emerging-technology teams should consider in developing emerging-technology portfolios.”

Reacting to last year’s hype cycle report (see below), I made the following comment:

    Machine learning is making its first appearance on the chart this year, but already past the peak of inflated expectations. A glaring omission here is “deep learning,” the new label for and the new generation of machine learning, and one of the most hyped emerging technologies of the past couple of years.

[chart: Gartner Hype Cycle for Emerging Technologies]
Haven't you heard? I'm a doomer!
pstarr
NeoMaster
 
Posts: 26076
Joined: Mon 27 Sep 2004, 02:00:00
Location: Behind the Redwood Curtain

Re: Making Tesla pt. 2

Unread postby kublikhan » Fri 06 Oct 2017, 20:17:12

Pstarr, why are you using secondhand sources of old data when you can go straight to Gartner and get current data from the horse's mouth?
Complementary emerging technologies such as machine learning, blockchain, drones (commercial UAVs), software-defined security and brain-computer interfaces have moved significantly along the Hype Cycle since 2016.

3 Trends Appear in the Gartner Hype Cycle for Emerging Technologies:

AI Everywhere
Deep Learning
Deep Reinforcement Learning
Artificial General Intelligence
Autonomous Vehicles
Cognitive Computing
Commercial UAVs (drones)

Artificial Intelligence (AI) Everywhere
Consider the potential impact of AI-enabled autonomous vehicles. They could reduce accidents, improve traffic, and even slow urbanization as people can use travel time and won’t need to live near city centers. “When autonomous vehicles, AI, IoT and other emerging technologies are combined with economic trends like the sharing economy, we truly see different business designs that profoundly disrupt the market,” Walker says. Uber is a prime example of how a business is fundamentally shifting an industry dominated by private vehicles to potentially upending the industry with transportation as a service.

“AI technologies will be the most disruptive class of technologies over the next 10 years due to radical computational power, near-endless amounts of data and unprecedented advances in deep neural networks. These will enable organizations with AI technologies to harness data in order to adapt to new situations and solve problems that no one has ever encountered previously.”

Also in the realm of AI, machine learning, one of the hottest concepts in technology, has the potential to benefit industries from supply chain to drug research. It will soon become impossible for conventional engineering solutions to handle the increasing amounts of available data. Machine learning offers the ability to extract certain knowledge and patterns from a series of observations.
Top Trends in the Gartner Hype Cycle for Emerging Technologies, 2017

So Gartner thinks AI will be the most disruptive class of technologies over the next 10 years? It thinks machine learning is the hottest concept in technology? And yet you use this as a source to back up your opinion that it's all a con, that the technology has stagnated for the last decade, that there is no real progress. I think you might have to reexamine your core beliefs in this area, because Gartner (and everyone else) is saying the exact opposite of what you are saying.

And BTW, Google did not give up on its self-driving car program. In fact, it may be launching a commercial self-driving car service later this year:

If there's a frontrunner in the race to deliver a fully self-driving car—no steering wheel, pedals, or human required—it's Waymo. The Alphabet-owned unit based in Mountain View, California, has driven 3 million miles on public roads since 2010. And since April, it has been whisking Arizonans around Phoenix in its cars, as it prepares for a commercial launch on a yet-to-be announced timeline.
WITH INTEL’S CHIPS, GOOGLE COULD AT LAST DELIVER SELF-DRIVING CARS

Google's self-driving car unit prepares to launch a taxi service near Phoenix. Two anonymous sources have told Efrati that Google's self-driving car unit, Waymo, is preparing to launch "a commercial ride-sharing service powered by self-driving vehicles with no human 'safety' drivers as soon as this fall." Obviously, there's no guarantee that Waymo will hit this ambitious target. But it's a sign that Waymo believes its technology is very close to being ready for commercial use. And it suggests that Waymo is likely to introduce a fully driverless car network in 2018 if it doesn't do so in the remaining months of 2017.

According to Efrati, Waymo's service is likely to launch first in Chandler, a Phoenix suburb where Waymo has done extensive testing. Waymo chose the Phoenix area for its favorable weather, its wide, well-maintained streets, and the relative lack of pedestrians. Another important factor was the legal climate. Arizona has some of the nation's most permissive laws regarding self-driving vehicles.

According to the Arizona Republic, a 2015 executive order from Gov. Doug Ducey "allows universities to test vehicles with no driver on board so long as a licensed driver has responsibility for the cars and can take control remotely if the vehicle needs assistance." Waymo is getting ready to take the same approach. The company has built a real-time command center that allows self-driving cars to "phone home" and consult human operators about the best way to deal with situations it finds confusing. The ability to remotely monitor vehicles and give timely feedback on tricky situations will be essential if Waymo hopes to eliminate the human driver from its cars.

Most of Waymo's rivals are aiming to release self-driving cars in 2020, 2021, or later. Even if Waymo's schedule slips a few months and it introduces a self-driving car service in the middle of 2018 instead of late 2017, that will still give the company a multiple-year head start over most of its rivals. And it would confound skeptics who insist that full self-driving technology is still years away.
Fully driverless cars could be months away
The oil barrel is half-full.
kublikhan
Fission
 
Posts: 3969
Joined: Tue 06 Nov 2007, 03:00:00
Location: Illinois

Re: Making Tesla pt. 2

Unread postby asg70 » Fri 06 Oct 2017, 20:23:57

The reason he's putting his foot in his mouth is that he is NOT interested in a genuine intellectual discussion and simply wants to blindly naysay any and all technological innovation. Innovation delays doom and hence delays the pain and suffering he is waiting to see rain down on those who, in his own words, failed to heed his warnings. Those people are, of course, "rich and spoiled" first-worlders who live in the suburbs and sip lattes, especially holier-than-thou Tesla drivers.

Hubbert's curve, meet S-curve: https://www.youtube.com/watch?v=2b3ttqYDwF0
asg70
Intermediate Crude
 
Posts: 911
Joined: Sun 05 Feb 2017, 13:17:28
