Asked by Bob
Read the excerpt from Amit’s essay.
In the mid-2010s, it was predicted that by 2020, self-driving cars would be prevalent on the nation’s highways. Automotive manufacturers agreed, proclaiming that they would have cars driven by AI—artificial intelligence—by the early 2020s. However, this proved to be a failed forecast. By 2021, self-driving cars were not even out of testing phases. What led to this massive miscalculation? After all, the technology existed to make self-driving vacuums, so why not cars? It should have been easy for engineers to take the parts from one self-driving machine and put them into another.
Which logical fallacy is present in Amit’s argument?
straw man
non sequitur
false dilemma
faulty analogy
Answered by GPT-5 mini (AI)
faulty analogy
Amit assumes that because simple autonomous devices (like robot vacuums) exist, the same components and approach should work for cars. That treats two very different systems as if they were comparable, ignoring huge differences in environmental complexity, safety requirements, sensors, computing power, legal and regulatory issues, and edge-case handling.