Case Against The Singularity

Author: Swagnarok

Swagnarok
Wikipedia defines The Singularity as "a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseen consequences for human civilization." Discourse on the topic often cites Artificial Intelligence (AI) as a probable cause of The Singularity. In this short-ish post, I aim to demonstrate why The Singularity won't happen anytime soon.

1. Energy Constraints
There is, ultimately, a hard thermodynamic limit (Landauer's limit) to the number of irreversible operations a computer can perform for a given amount of energy input. And even that upper limit assumes conditions which cannot be realized by a normal, everyday computer, such as being compressed into a black hole or operating at a constant near-absolute-zero temperature.
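For a sense of scale, here is a minimal back-of-the-envelope sketch (assuming Landauer's bound of kT·ln 2 joules per irreversible bit operation at roughly room temperature; the one-watt budget is mine, for illustration):

```python
import math

# Landauer's bound: erasing one bit costs at least k*T*ln(2) joules
BOLTZMANN = 1.380649e-23   # J/K, exact SI value
T_ROOM = 300.0             # kelvin, roughly room temperature

energy_per_bit = BOLTZMANN * T_ROOM * math.log(2)   # ~2.87e-21 J

# Irreversible bit operations per second achievable on a 1-watt budget
ops_per_watt = 1.0 / energy_per_bit
print(f"Landauer energy per bit operation: {energy_per_bit:.2e} J")
print(f"Theoretical ops/s per watt: {ops_per_watt:.2e}")
# Real chips sit many orders of magnitude above this floor.
```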
At this time, computing consumes around 1 percent, or slightly more, of the world's electricity. A single ChatGPT-4 query uses between 1/1000th and 1/100th of a kilowatt-hour of electricity, and AI is still in an early phase of adoption, with systems like ChatGPT largely perceived as a novelty. But imagine, if you would, a world where such heavy data-crunching applications are used on a day-to-day basis by the average person around the world. Imagine, if you would, a 24/7 arms race between criminals and state hackers who use AI to crack cryptographic digital security layers and their would-be targets who add more and more layers to protect their property. Brute-forcing AES-256 would take enough power to supply 50 million American households for billions of years; even if resourceful programmers found shortcuts that trimmed this time down drastically, hacking would still be a quite expensive affair. In real life, even the most efficient mining operation expends about 155,000 kilowatt-hours to mine one Bitcoin; at its peak, Bitcoin mining took up more than 7% of the electricity usage of Kazakhstan, a fairly wealthy country of almost 20,000,000 people.
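To put the AES-256 claim on a footing you can check, here is a rough order-of-magnitude sketch (the ~10,800 kWh/year household figure and the Landauer-limit cost per guess are assumptions of mine; even at that thermodynamic floor, the answer comes out far beyond billions of years):

```python
# Order-of-magnitude check on brute-forcing AES-256, assuming each key
# guess costs only one Landauer-limit bit operation (wildly optimistic:
# a real key test takes far more energy than this floor).
KEYSPACE = 2 ** 256
ENERGY_PER_GUESS = 2.87e-21          # J, Landauer bound at ~300 K
HOUSEHOLD_KWH_PER_YEAR = 10_800      # assumed average US household usage
JOULES_PER_KWH = 3.6e6
HOUSEHOLDS = 50_000_000

total_joules = (KEYSPACE / 2) * ENERGY_PER_GUESS       # expect half the keyspace
budget_per_year = HOUSEHOLDS * HOUSEHOLD_KWH_PER_YEAR * JOULES_PER_KWH
print(f"~{total_joules / budget_per_year:.1e} years")  # ~8.5e37 years
```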
In short, imagine a world with exponential growth in demand for computing, while electricity supply grows at a far slower rate. After all, it takes years to commission one gas-fired power plant and even longer to clear the regulatory hurdles to build a nuclear plant. Wind and solar entail buying up large tracts of land in suitable locations, and have their own issues, such as scaling up battery storage. Something will eventually have to give.
The average voter, of course, won't tolerate 60% of local power consumption being siphoned away from homes and toward such enterprises. So the human factor will further restrict the combined processing power of all computers, which makes the unbounded growth The Singularity requires impossible.

2. Water Constraints
Related to the above, computers guzzle water. A lot of water. Cooling raises computing efficiency and keeps physical components from frying. Every 5-50 ChatGPT queries consume about half a liter of water, and one Bitcoin transaction uses 16,000 liters of water. Water supply is arguably harder to scale up than electricity, as groundwater is finite and a desalination plant takes years to build. And again, the average person wouldn't tolerate half their municipal water supply being diverted from their homes to giant computing plants.
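Taken at face value, those per-query figures add up quickly; a minimal sketch (the one-billion-queries-per-day volume is a hypothetical of mine, not a measured figure):

```python
# Scaling the per-query water figures to a hypothetical global load
QUERIES_PER_DAY = 1_000_000_000   # assumed volume, for illustration only
LOW = 0.5 / 50                    # liters/query, optimistic end (0.5 L per 50)
HIGH = 0.5 / 5                    # liters/query, pessimistic end (0.5 L per 5)

print(f"Daily water footprint: {QUERIES_PER_DAY * LOW / 1e6:.0f}"
      f"-{QUERIES_PER_DAY * HIGH / 1e6:.0f} million liters")
```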

3. Other Constraints
Imagine a future where AI can churn out useful inventions by aggregating patented schematics. Beyond-human-control technological progress is a big part of the whole "Singularity" concept. Assuming world governments didn't crack down on this in the name of intellectual property rights, there are still problems. An invention that's never built is useless, and building it entails physical supply chains and infrastructure. 3D printers are limited to certain materials and can only produce a certain range of results. As long as these enterprises have human owners, they remain within human control. Even if governments recognized the property rights of AI, the process still wouldn't be outside human control unless all the work was automated as well.
The rate of advancement here would be slowed by physical constraints. It takes time to build a factory and install the prerequisite equipment. It takes startup money, and the venture needs to turn a profit that might not materialize. A factory needs to bring in supplies on vehicles that cannot move faster than roadway speed limits. And so on.
Sidewalker
The superintelligence is already here, and they call it...

...Sidewalker
ADreamOfLiberty
@Swagnarok
There is, ultimately, a hard thermodynamic limit (Landauer's limit) to the number of irreversible operations a computer can perform for a given amount of energy input.
This assumes that the reason we don't have true artificial intelligence yet is that we don't have enough operations per unit time.

Algorithm efficiency is something that many people in IT don't really understand or care about anymore (and it shows: our software is crap). They're only getting away with it because the hardware keeps getting better and the corporate world doesn't favor reusable, refined solutions over bazillions of lines of patchware interpretive nonsense.
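A toy illustration of the point (my own example, nothing more): the same membership test done with a linear scan versus a hash set, where the algorithmic change buys a speedup that hardware alone would take years to match.

```python
import time

# Same task, two algorithms: membership tests against a list (O(n) each,
# O(n^2) overall) versus a hash set (O(1) each, O(n) overall).
N = 10_000
haystack_list = list(range(N))
haystack_set = set(haystack_list)

t0 = time.perf_counter()
slow = sum(1 for n in range(N) if n in haystack_list)
t1 = time.perf_counter()
fast = sum(1 for n in range(N) if n in haystack_set)
t2 = time.perf_counter()

assert slow == fast
print(f"list scan: {t1 - t0:.3f}s   set lookup: {t2 - t1:.5f}s")
# A ~1000x speedup from the data structure alone, no new hardware needed.
```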

It also assumes that we're using energy efficiently and that we're spending a lot of energy on computing, neither of which is true.

If we had low-voltage, low-clock-rate parallel computing (like a GPU or a biological brain, which is what a neural net needs) filling a cube 500 meters on each side, we could probably power it with one nuclear plant, and it would be very hard to argue it lacked the computing power required for an efficient artificial intelligence to do some impressive things.
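As a rough sanity check on the scale (assuming a ~1 GW reactor, the human brain's ~20 W draw as the efficiency benchmark, and a very rough 10^15 synaptic events per second; all three numbers are assumptions for illustration):

```python
# How much brain-like compute could one large reactor power?
PLANT_WATTS = 1.0e9          # assumed ~1 GW electrical output
BRAIN_WATTS = 20.0           # rough power draw of a human brain
BRAIN_EVENTS_PER_SEC = 1e15  # very rough synaptic events/sec per brain

brain_equivalents = PLANT_WATTS / BRAIN_WATTS
print(f"Brain-equivalents per plant: {brain_equivalents:.1e}")  # ~5e7
print(f"Aggregate events/sec: {brain_equivalents * BRAIN_EVENTS_PER_SEC:.1e}")
```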


Imagine, if you would, a 24/7 arms race between criminals and state hackers who use AI to crack cryptographic digital security layers and their would-be targets who add more and more layers to protect their property. Brute-forcing AES-256 would take enough power to supply 50 million American households for billions of years
The latter fact is why there is no arms race and will never be an arms race. The advantage is so firmly on the party wishing to keep a secret that there is no point.
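The asymmetry is easy to make concrete (a sketch under my own assumptions: a wildly generous planet-scale rig testing 10^18 keys per second, while the defender's cost per added key bit stays essentially flat):

```python
# Defender adds bits at linear cost; attacker's expected work doubles per bit.
GUESSES_PER_SECOND = 1e18    # assumed planet-scale cracking rig
SECONDS_PER_YEAR = 3.15e7

for bits in (64, 128, 256):
    years = 2 ** (bits - 1) / GUESSES_PER_SECOND / SECONDS_PER_YEAR
    print(f"{bits}-bit key: ~{years:.1e} years to brute force on average")
```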


In real life, even the most efficient mining operation expends about 155,000 kilowatt-hours to mine one Bitcoin; at its peak, Bitcoin mining took up more than 7% of the electricity usage of Kazakhstan, a fairly wealthy country of almost 20,000,000 people.
BTC was designed to plateau. They wouldn't have used 7% of their energy on it if there were anything more worthwhile to do with it.



2. Water Constraints
Related to the above, computers guzzle water. A lot of water. Cooling raises computing efficiency and keeps physical components from frying. Every 5-50 ChatGPT queries consume about half a liter of water, and one Bitcoin transaction uses 16,000 liters of water.
This is utter nonsense. Water is not destroyed by being used in a coolant loop (open or closed). It's not destroyed by evaporation. It's not destroyed when dumped into a river at a slightly warmer temperature.

One bitcoin transaction does not use 16,000 liters of water; it doesn't use 1 mL of water. I've computed the confirmation hashes for thousands of bitcoin transactions on my home PC and never given it a molecule of water.
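For reference, this is essentially all the computation involved; a minimal sketch using Python's standard library against the well-known genesis block header (the expected output is the famous genesis block hash):

```python
import hashlib

# The 80-byte serialized header of Bitcoin's genesis block (version,
# previous hash, merkle root, timestamp, difficulty bits, nonce).
header = bytes.fromhex(
    "0100000000000000000000000000000000000000000000000000000000000000"
    "000000003ba3edfd7a7b12b27ac72c3e67768f617fc81bc3888a51323a9fb8aa"
    "4b1e5e4a29ab5f49ffff001d1dac2b7c"
)

# Bitcoin's proof of work is just SHA-256 applied twice
digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
print(digest[::-1].hex())  # big-endian display; leading zeros are the PoW
# -> 000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f
```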


3D printers are limited to certain materials and can only produce a certain range of results. As long as these enterprises have human owners, they remain within human control. Even if governments recognized the property rights of AI, the process still wouldn't be outside human control unless all the work was automated as well.
The full production chain can be automated. That will be a singularity of sorts (people won't have to work anymore, and the only long-term questions will be population control and natural resource allocation).

zedvictor4
@Swagnarok
I would suggest that technological growth is already uncontrollable and irreversible.

Insofar as material evolution, and the role that humanity plays therein, is an inevitable consequence of the universal system.

Technology/AI will perhaps play a similar role in the future.


"Unforeseen", in so much as we cannot see into the future.


Nonetheless, we can be certain that what does happen, will happen.



WyIted
You have a great point about the physical constraints of a singularity.

I do think that getting there does not rely on Moore's law. I lack the technical vocabulary to talk about this in a way that makes sense, but a lot of articles I've read on artificial intelligence lately describe advances on a curve similar to Moore's law, achieved by rearranging the parts of AI systems to pack more of a punch with less power.

I also think that, whether we are working towards our own destruction or towards paradise, we have an obligation to push towards the singularity by pushing for a Kurzweilian merger and engaging in positive eugenics.

The reason we want to make sure this generation handles the singularity and navigates it is that, despite how shitty we are, the next generation is worse and would lack the cojones to suffer in the ways necessary to avoid the apocalypse should things start going sideways.

ebuc
@Swagnarok
Bucky Fuller predicted that if humanity's 200 or so nations did not unify as a global community by the year 2000, it would be curtains for humanity.

My formula prognosticated apocalypse by 2015, and we're 9 years beyond that.

Admiral Rickover, who designed and implemented the first atomic submarine { The Nautilus } and the first nuclear power plant for generating electricity, stated in a video that he believed nuclear power would lead to the end of humanity on Earth.

Oppenheimer's reservations about creating hydrogen bombs did nothing to stop M.A.Destruction { mutually assured destruction } and the nutty path humanity was on until the 1991 START treaty, with bombers in the sky 24 hours a day ready to drop hydrogen bombs on another nation.

If we go back to Fuller, he uses the analogy that how far back you pull a bowstring determines how far forward the arrow travels --assuming the direction is the proper angle-- i.e. if we want to see where humanity is headed, then the further into the past we look, the better we can see the trajectory we're on.

With Einstein's general relativity --whose singularity theorems R. Penrose proved in 1965-- a singularity { incomplete null geodesics } occurs in a black hole. Though Roy Kerr { of rotating black hole fame } disagrees, and all real black holes are rotating.

So a human singularity means humanity's future is an incomplete null geodesic, i.e. vanishes into nothingness on Earth.

I think Fuller also states it will be touch and go { whatever that means } all the way to the very end, assuming the end of humanity is coming sooner rather than later.

I see two ways it can go:

1} all-for-one and one-for-all in a spirit of unity { cooperating to the end, or to no end, sooner rather than later }; that is the more spiritually fun pathway,

2} a dictator who forces unity on humanity { a much less cooperative path }, ergo a much less fun way forward. Either way, the human singularity on Earth may occur. Bow and arrow --)--> trajectory ------> target ----> future -----> ? ? ?

So humanity came together once to step back from M.A.Destruction, yet there are still enough hydrogen bombs to destroy humanity.


58 days later

ebuc
@Swagnarok
..." research that confirms AI's economic value, this time when used to speed up MRI data reconstruction.Mar 24, 2024 "...

The above is from the Imaging Wire and requires a subscription. The above was also semi-known in 2020-2023, so it is not exactly news.

A singularity of humanity is possible, unlike a singularity of a 3D, volumetric, occupied space existing in complement to time/motion/frequency/vibration etc.

Is singularity of humanity probable, or feasible?

..." Ben Goertzel, CEO of SingularityNET—who holds a Ph.D. from Temple University and has worked as a leader of Humanity+ and the Artificial General Intelligence Society—told Decrypt that he believes artificial general intelligence (AGI) is three to eight years away. AGI is the term for AI that can truly perform tasks just as well as humans, and it’s a prerequisite for the singularity soon following. "