The new list is out for June 2019.
Observations:
(1) As noted before, the only relevant countries at this stage are the United States and China (Europe and Japan became marginal in this game a decade ago). The world’s two top systems are now again American, while China’s one-time leader, the Sunway TaihuLight (built entirely on Chinese processors), is now down to #3.
Countries | Count | System Share (%) | Rmax (GFlops) | Rpeak (GFlops) | Cores |
---|---|---|---|---|---|
China | 219 | 43.8 | 465,851,778 | 885,808,195 | 26,908,328 |
United States | 116 | 23.2 | 600,014,746 | 851,002,631 | 17,337,080 |
Japan | 29 | 5.8 | 117,247,605 | 182,351,829 | 4,308,428 |
France | 19 | 3.8 | 67,183,127 | 101,046,990 | 2,194,592 |
United Kingdom | 18 | 3.6 | 39,955,369 | 49,191,669 | 1,518,312 |
Germany | 14 | 2.8 | 59,140,222 | 85,184,256 | 1,507,998 |
Ireland | 13 | 2.6 | 21,438,430 | 27,555,840 | 748,800 |
Netherlands | 13 | 2.6 | 20,877,830 | 26,763,264 | 730,080 |
Canada | 8 | 1.6 | 14,497,480 | 27,682,534 | 447,488 |
Italy | 5 | 1 | 30,098,790 | 47,843,836 | 794,032 |
Korea, South | 5 | 1 | 20,966,960 | 34,322,860 | 786,020 |
Singapore | 5 | 1 | 7,719,590 | 9,891,840 | 268,800 |
Australia | 5 | 1 | 6,669,188 | 10,232,963 | 257,336 |
Switzerland | 4 | 0.8 | 25,373,050 | 32,173,545 | 529,940 |
South Africa | 3 | 0.6 | 3,275,620 | 4,193,050 | 109,656 |
Brazil | 3 | 0.6 | 4,082,300 | 7,123,661 | 125,184 |
India | 3 | 0.6 | 7,457,490 | 8,228,006 | 241,224 |
Saudi Arabia | 3 | 0.6 | 10,109,130 | 13,858,214 | 325,940 |
Taiwan | 2 | 0.4 | 10,325,150 | 17,297,190 | 197,552 |
Finland | 2 | 0.4 | 2,956,730 | 4,377,293 | 80,608 |
Russia | 2 | 0.4 | 3,678,350 | 6,239,795 | 99,520 |
Sweden | 2 | 0.4 | 4,771,700 | 6,773,346 | 131,968 |
Spain | 2 | 0.4 | 7,615,800 | 11,699,115 | 171,576 |
Denmark | 1 | 0.2 | 1,069,554 | 2,107,392 | 31,360 |
Czech Republic | 1 | 0.2 | 1,457,730 | 2,011,641 | 76,896 |
Hong Kong | 1 | 0.2 | 1,649,110 | 2,119,680 | 57,600 |
Poland | 1 | 0.2 | 1,670,090 | 2,348,640 | 55,728 |
Austria | 1 | 0.2 | 2,726,078 | 3,761,664 | 37,920 |
(2) Also as noted, Moore’s Law has tailed off for supercomputers too… and there is no sign of recovery to the old trendline.
Please keep off topic posts to the current Open Thread.
If you are new to my work, start here.
https://vz.ru/news/2019/6/19/983247.html
And screw useless penis-measuring in supacamputahs that don’t produce squat.
Moore’s Law has run into the laws of physics. Clock speed increases are incremental now, because at 3GHz an electron can’t travel that far, even at the speed of light – and the ever increasing density of components hits power dissipation and current leakage issues.
In the labs a lot of clever chaps are looking at various Indium Gallium Arsenide devices. But “the trees do not grow up to the sky”.
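The 3 GHz point above can be sanity-checked with one line of arithmetic; a minimal sketch (the function name is mine):

```python
# Back-of-envelope check of the signal-reach claim: the distance a
# light-speed signal covers in one clock cycle. Electrons in silicon
# interconnects are considerably slower still.
C = 299_792_458.0  # speed of light in vacuum, m/s

def reach_per_cycle_cm(clock_hz: float) -> float:
    """Distance (cm) a light-speed signal travels in one cycle."""
    return C / clock_hz * 100.0

print(f"3 GHz: {reach_per_cycle_cm(3e9):.1f} cm per cycle")
print(f"6 GHz: {reach_per_cycle_cm(6e9):.1f} cm per cycle")
```

At 3 GHz the signal reach is about 10 cm per cycle, which is why cross-chip and cross-board timing becomes the binding constraint.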
Moore’s Law has tailed off for transistor shrinkage (which is what Moore’s Law actually is).
But low-cost parallel computing (via Nvidia’s GPU) runs neural networks 20-50 times faster than serial computing, even though the transistor sizes in the GPU are the same as in an Intel CPU. The GPU has enabled the recent AI surge to happen.
So the progress of computing is:
i) Not linear, and has many branches.
ii) Decoupling from Moore’s Law, and advancing without being dependent on Moore’s Law.
The parallel-computing stopgap will buy enough time for a new form of computing to succeed anything dependent on semiconductor transistor shrinkage.
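The 20-50x GPU speedup quoted above is roughly what Amdahl's law predicts once a couple of percent of a workload stays serial; a small illustrative sketch (the parallel fractions and core count are my example numbers, not from the comment):

```python
# Amdahl's law: speedup from parallelizing a fraction p of the work
# across n workers. Illustrates why parallel-hardware speedups
# plateau in the tens once any serial fraction remains.
def amdahl_speedup(p: float, n: int) -> float:
    """p: parallelizable fraction of runtime, n: parallel workers."""
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.90, 0.98, 0.999):
    print(f"p={p}: x{amdahl_speedup(p, 5000):.0f} on 5000 cores")
```

With a 2% serial fraction, even 5000 cores give only about a 50x speedup, which is in the range the comment cites for GPU-accelerated neural networks.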
But what counts is:
1) Software, software, software – From compilers to system management
2) Reliability
3) Resilience in the face of multiple failures per day (this is highly parallel HW, stuff burns out all the time)
4) Task-specific hardware add-ons (nowadays, one throws GPUs at the system, but is that what is needed?)
5) Memory Bandwidth and Compute Node Interconnect Capacity
6) Memory Geometry – do we have shared memory (hard to do) or distributed memory? The geometry to choose depends on the problem you want to solve.
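The distributed-memory choice in point 6 can be sketched as a toy 1-D stencil: each "node" owns a slab of the array and must explicitly exchange one-cell "halos" with its neighbours before every smoothing step, which is exactly the communication a shared-memory machine would hide. A minimal pure-Python sketch (no MPI; the helper names are mine):

```python
# Toy distributed-memory layout: each "node" owns a slab of a 1-D
# array; before each smoothing step it must receive one-cell "halos"
# from its neighbours.
def split(data, nodes):
    k = len(data) // nodes
    return [data[i * k:(i + 1) * k] for i in range(nodes)]

def halo_exchange(slabs):
    # Pad every slab with its left neighbour's last cell and its
    # right neighbour's first cell (edge nodes reuse their own edge).
    padded = []
    for i, s in enumerate(slabs):
        left = slabs[i - 1][-1] if i > 0 else s[0]
        right = slabs[i + 1][0] if i < len(slabs) - 1 else s[-1]
        padded.append([left] + s + [right])
    return padded

def smooth_step(slabs):
    # Three-point moving average, computed independently per node
    # once the halos are in place.
    return [[(p[j - 1] + p[j] + p[j + 1]) / 3.0
             for j in range(1, len(p) - 1)]
            for p in halo_exchange(slabs)]

slabs = split([float(x) for x in range(16)], 4)
flat = [v for s in smooth_step(slabs) for v in s]
print(flat)
```

In a real machine the `halo_exchange` step is network traffic, which is why interconnect capacity (point 5) and memory geometry (point 6) are inseparable.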
Meanwhile, Quantum Computers are inching closer to something useful:
https://www.quantamagazine.org/does-nevens-law-describe-quantum-computings-rise-20190618/
Meanwhile the exponential curve of operations-per-second plotted against the year is flattening:
https://www.nextplatform.com/2019/06/18/dennard-scaling-demise-puts-permanent-dent-in-supercomputing/
Yep.
And, the most important: garbage in -> garbage out.
Or..hehe…human element. Management in particular.
“the only relevant countries at this stage are the United States and China”
why? what matters is what the supercomputers are being used for.
china appears to be doing pretty much nothing with theirs. they seem to be building them just because. “See how big our dick is too?”
like a 12 year old with a veyron. what are you gonna do with that, kid? nothing? that’s what i thought.
if anybody can explain what china does with their supercomputers, i’m very willing to listen. mind not closed at all. but i don’t see it. what i see is them standing around these giant space heaters smiling while the computer sits there doing nothing important.
a country that can build huge, power inefficient supercomputers only because they can. they think other people are watching and it’s a prestige thing.
further thoughts:
top computer there is running 14nm. it’s made by global foundries. they’re down to 12nm now.
but what does AMD’s new 7nm process do to the computing landscape? TSMC makes that. can IBM POWER CPUs be made by TSMC at 7nm, or is there some legal contract or technical limitation such that only GF can make them?
there should be a big jump up in supercomputer performance once 7nm makes its way to these chips.
intel is clearly behind now, but is about to get to 10nm.
lastly, D-Wave ‘quantum computers’ are evidently not the real thing. i don’t claim to understand them completely, but they appear to not be true quantum computers.
There will be no more big jumps. AMD looks impressive right now only because they finally almost caught up to Intel in IPC per core. Maybe Zen 2 will get ahead, but again, not by much. Hence their attempt to differentiate by the number of cores on multiple relatively cheap small dies, but Intel will play this game as well.
And 7/10 nm will be a main node for like a decade minimum.
This chart assumes that there are no classified computers.
I assume that the most cutting edge technology is almost never public knowledge.
I do wonder whether this might not be very relevant anymore. First supercomputers came out in the 1960s. The top 500 list started in 1993. Japan was a competitor for a while, but they didn’t develop any superweapons, or take over the world.
It is arguably easier than ever to buy computing time. Of course, it might be better not to trust the cloud. But your humdrum things done largely by individuals are probably better competitive indicators.
This fetishism over GFlops and Cores doesn’t have much value for nationalism. What continues to matter in a way that won’t be touted is demographics.
The physical limit is at about 4nm. After that the quantum world begins to dominate and you run into problems with quantum tunnelling, where electrons will simply jump over barriers (no proper insulation!). See [1], [2]. The real limit will probably be a bit above that. So we are slowly reaching the end of the line here. After that it is all about architecture and algorithms.
[1] https://en.wikipedia.org/wiki/Quantum_tunnelling#Applications
[2] http://psi.phys.wits.ac.za/teaching/Connell/phys284/2005/lecture-02/lecture_02/node13.html
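The tunnelling problem can be put in numbers with a rough WKB estimate for a rectangular barrier, T ~ exp(-2*d*sqrt(2*m*dE)/hbar). Assuming an illustrative 1 eV effective barrier (my number, purely for scale), leakage grows explosively as the insulator thins toward a nanometre:

```python
import math

# Rough WKB estimate of electron tunnelling probability through a
# rectangular barrier. The 1 eV barrier height is an assumed,
# illustrative value, not a real device parameter.
HBAR = 1.0546e-34  # reduced Planck constant, J*s
M_E = 9.109e-31    # electron mass, kg
DE = 1.602e-19     # barrier height above electron energy: 1 eV, in J

def tunnel_prob(d_m: float) -> float:
    """Transmission probability through a barrier of width d_m (metres)."""
    kappa = math.sqrt(2 * M_E * DE) / HBAR  # decay constant, 1/m
    return math.exp(-2 * kappa * d_m)

for nm in (4, 2, 1):
    print(f"{nm} nm barrier: T ~ {tunnel_prob(nm * 1e-9):.1e}")
```

Under these assumptions, halving the barrier from 4 nm toward 1 nm raises the leakage probability by many orders of magnitude, which is the "no proper insulation" problem in the comment.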
” while the computer sits there doing nothing important. ”
What do you expect? The computer to run the entire length of the Great Wall of China and then hop back?
They are using them for the very same things everyone else uses them for.
High end chip manufacturing is still dominated by the US. How competitive will chinese processors be in the future internationally?
“There will be no more big jumps.”
That’s potentially very bad for the United States long term. That means even smaller countries might one day be competitive in this field as the US has in the past prevented rivals mainly through huge R&D expenditures and the continual advancement coming from them as a result. What happens when the technology gets better at only marginal rates and the US already did much of the initial research? Perhaps smaller countries freeload on that initial R&D and focus on price point, production capacity, production quality or something else, instead? Isn’t that what AMD is doing? Offering slightly inferior products but at much reduced prices compared with Intel?
You can think of many, many potential uses, but from our cheap stereotypes of China’s government? What do we imagine China’s government is interested in (from our stereotypes of China’s government)?
For example, at least in my possibly unfair stereotype of China’s government – even nowadays a quite unintelligent government employee (considering all the new techniques emerging just from startups) could hardly run out of different ideas and potential methods for mining text, speech, images and even videos in their citizens’ electronic communications.
Surveillance state.
Panopticon.
To keep “normies” in check. “Social credit” wise.
And, of course, to select out that tiny percentage of potential troublemakers.
Hehe…I am sure that recent feeds from Hong Kong are being worked on as we speak.
Welcome to the future.
The Chinese could be using them for bitcoin mining.
Supercomputers aren’t an economic way to do so; however, if the government is paying for everything, it is pure profit.
The low showing of South Korea on the chart shows there’s no giant need for supercomputers to have a large and innovative tech manufacturing economy.
Highly unlikely: https://www.gwern.net/Slowing%20Moore's%20Law
“There will be no more big jumps.”
i guess it depends on what number qualifies as a big jump. but i’d say going from where intel was for years, stuck at 14nm, all the way down to AMD’s 7nm, would qualify as a big jump, for my purposes. all those IBM POWER CPUs and intel xeon CPUs were manufactured on 14. other industry watchers’ opinions may vary of course on what constitutes a big jump.
i’m not an expert on huge, vastly interconnected supercomputers using over 100,000 cores, but isn’t the point of such architecture – that core count and performance in multicore threads is the thing that matters? and not IPC?
nevertheless i do think ryzen 2 is now ahead of intel and IBM in IPC at this point.
“Hence their attempt to differentiate by the number of cores on multiple relatively cheap small dies, but Intel will play this game as well.”
well, that’s the rub, correct? it looks like AMD is about to drop a bomb on intel and IBM by the end of 2019 or so. if Ryzen 9 3950X has the capability that AMD says it does, then a 32 core version of Zen 2 pretty much blows all the big intel server CPUs out of the water. cascade lake is out of there.
now i’m not an expert on server CPU architecture so maybe ryzen 9 32 core doesn’t do some of the things which you would want in a server CPU to do. but extrapolating from the 16 core part, and knowing from threadripper that AMD can build a 32 core part, intel is probably surpassed.
“And 7/10 nm will be a main node for like a decade minimum.”
agree that this will be the integrated circuit space for the next 5 years or so, but one of the manufacturers may take the next step downwards by 10 years from now.
“This chart assumes that there are no classified computers.”
extremely unlikely. very similar situation to the idea that iraq was able to produce nuclear fission devices – all available peripheral evidence suggested they could not. a few scientists have posted about this.
in the same way it’s super unlikely that any nation could build a colossal supercomputer like inside a mountain for instance, and keep it secret for years. we’d detect the peripheral evidence.
“Japan was a competitor for a while, but they didn’t develop any superweapons, or takeover the world.”
my point about the chinese. what are they doing with their supercomputers? so far it looks like nothing.
“The physical limit is at about 4nm.”
agree the engineers will need to do something different as they get down towards what we generally understand as the physical limits.
i’ve been pessimistic about terminator style autonomous robots for this reason. you don’t get that free doubling in computing capability every 3 years anymore. so you still have this situation where the human brain is doing something at 20 watts that no computer can do at 10 million watts.
“They are using them for very same things everyone else’s uses them.”
where’s the result then? show me their stuff. what new stuff does china have that came out of a supercomputer?
“That’s potentially very bad for the United States long term.”
not necessarily. already today most vendors will sell you a supercomputer. all you have to do is pay the money. but very, VERY few guys even know what to do with a supercomputer, so what’s the point of having it for most of these nations?
“could almost not run out of different ideas and potential methods for mining text, speech, images and even videos, in their citizens’ electronic communications.”
none of that requires a supercomputer at these monstrous capability levels. i don’t think google actually owns a proper supercomputer by these metrics, but they have more than enough computer capability to monitor all human internet activity.
“The Chinese could be using them for bitcoin mining.”
we already know who’s doing the bitcoin mining. it’s a chinese guy named Jihan Wu who runs a company called Bitmain. and they use special computers, called ASIC miners. but they’re not supercomputers.
this guy singlehandedly ‘ruined’ bitcoin, depending on what your ideological opinions were about the purpose and use of cryptocurrency. but there’s no debate about whether using custom built computers, he cornered the market on bitcoins.
cornering the bitcoin market was worth…2 billion dollars. a fortune for Wu, but hardly world changing in the scheme of things.
“The low showing of South Korea on the chart shows there’s no giant need for supercomputers to have a large and innovative tech manufacturing economy.”
correct. as of now, there’s only a few uses for them. the reason the more advanced nations don’t have 100 supercomputers is because they don’t need them.
nobody needs 100 aircraft carriers. 2 or 3 works fine for most purposes.
Almost everything gets commoditized eventually; it’s not that much of a problem if you have the next big thing ready.
But semiconductor revolution might have been a freak one-time wondrous winning lottery ticket just as well, considering just how much economic growth from the 60s onwards was driven by chips getting quicker and enabling communications. No next big thing in sight.
There are always fairly innocuous things like geomodelling and biomodelling if you run out of kulak faces to process. I’m sure the US is far ahead on that particular front anyway…
Review of some interest here, along with graphs on what CPUs are being used (the massive GPUs in these systems are left out, so the winner on the CPU front is fully Intel):
Supercomputing Coiled To Spring To Exascale
Images from the above article:
CPUs used. The preponderance of x86 is bizarre (I don’t think there is much IA-64/Itanium in the “Intel” blob; the “Xeon Phi”, which AFAIK is bad on the instructions/s/watt front and feels like force-feeding of x86, is probably not in there either, as it is considered to play in the GPU league).
https://i.imgur.com/oyEzAJf.jpg
GPUs used.
https://i.imgur.com/X6IlKeb.jpg
There is also an article about the Euro-Effort:
Europe Will Enter Pre-Exascale Realm With MareNostrum 5
They might deliver. But it will probably be late and over budget.
A sad failure of the imagination.
The big thing in industry (as opposed to research) is material science and simulating physical systems: modeling chemical systems, mechanical systems, biological systems, nuclear systems, weather systems, geological systems, other computer systems and – yeah – small quantum systems.
Total just bought itself a big one
Totally.
There is no “unknown extreme tech” in computing. If someone were doing all-optical processors for example, the industry would know it – if only because there will be a large hoovering up of graduates in materials science degrees.
I remember having read back in the 1990s, when nuclear tests were banned, how the Americans had an advantage because they had the computing capacity to actually simulate tests, while others did not. I guess supercomputers might be used to simulate nuclear weapons tests. Similarly, I’d guess supercomputers are very useful for designing stealth planes and similar things. I remember recently reading that Germany needed some simulated data from a proprietary American computer system to design its own air defense system. The Americans were only willing to clear the sale of some clock time on a dumbed-down version of that simulator software (running on, maybe, a supercomputer?), and the Germans couldn’t create it from scratch on their own. The development of the European fifth/sixth generation fighter jet might require some supercomputer work, too. (These generation numbers are getting dumb, but, whatever.)
I also guess you could circumvent supercomputers by adding lots of smaller computers (that’s what Google is doing), which might be more cost efficient (or it might not, depending on the nature of the tasks at hand, and also depending on how clever tricks you can use in your numerous cheap computers), but probably some of the supercomputers are used for things which normal computers might also do.
However, it’s interesting that the two biggest supercomputer superpowers are the Chinese and the Americans, and both have huge defense industries. Though the absence of Russia suggests that even there it might not be absolutely essential, if you have enough experience, experienced humans, and maybe software development abilities to circumvent the need for supercomputers. But probably they’d also greatly benefit from more computing power. Does, for example, designing railguns require supercomputers? Or at least, do supercomputers make the development faster and smoother?
I’m pretty sure designing hi-tech defense equipment is one use for them.
My guess is that the Chinese are probably not using their supercomputers as efficiently as the Americans, though efficiency here is in the eye of the beholder.
Imagine a world where most supercomputers have to be running around 80-90% of peak capacity all the time (and occasionally at 100% capacity) to pay for themselves. However, occasionally there’s a peak demand, which could reach 200-300% of capacity, in which case there’s a delay. By doubling your capacity, you will save yourself those delays, even if, most of the time, you will have to underutilize them (or use them for things which cheap computers might do at a way lower cost). However, it might still be useful if you believe that your rivals are ahead of you, and by saving those occasional delays, you are catching up, bit by bit.
Is it not possible that that’s what’s behind the seeming Chinese overcapacity?
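The over-provisioning argument can be checked with a toy queue simulation; the demand model here (occasional spikes on top of a steady load) is invented purely for illustration:

```python
import random

# Toy queue: each step some work arrives; a machine of given capacity
# drains it; leftover work carries over as backlog (i.e. delay).
# The demand distribution is a made-up illustration: usual load a bit
# below capacity, with occasional large spikes.
def mean_backlog(capacity: float, steps: int = 10_000, seed: int = 1) -> float:
    rng = random.Random(seed)
    backlog, total = 0.0, 0.0
    for _ in range(steps):
        demand = 2.5 if rng.random() < 0.05 else rng.uniform(0.6, 1.1)
        backlog = max(0.0, backlog + demand - capacity)
        total += backlog
    return total / steps

print("capacity 1.0:", round(mean_backlog(1.0), 3))
print("capacity 2.0:", round(mean_backlog(2.0), 3))
```

Doubling capacity leaves the machine idle most of the time but nearly eliminates the spike-induced backlog, which is the trade-off the comment describes.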
On the other hand, China has progressed beyond imagination in many fields. Semiconductor design and fabrication is just one such area, there’s stealth fighter design, even their airplane engines are now better (though the progress is less spectacular), shipbuilding (particularly the production of major military vessels like destroyers – they have basically reached the level of American designs), and lots of other industries. I’m not sure if each of these had a lot to do with supercomputers, but I’d be surprised if it turned out that none of them require supercomputers.
Yawn.
https://en.wikipedia.org/wiki/DF-ZF#Capabilities_and_design
“According to one source,[5] a weakness in the Chinese DF-ZF program is a lack of high-performance computing power, which stymies design work. … Since conventional interceptor missiles have difficulty against maneuvering targets traveling faster than Mach 5 (the DZ-ZF reenters the atmosphere at Mach 10), a problem exacerbated by decreased detection times, the U.S. may place more importance on developing directed-energy weapons as a countermeasure.”
“Though the absence of Russia suggests that even there it might not be absolutely essential, if you have enough experience, experienced humans, and maybe software development abilities to circumvent the need for supercomputers.”
Simulation is fine and good, but in the end you have to put it to the test in the real world.
Also, in order to be able to simulate, you need to have already a solid, formalised understanding of the phenomenon you study. Some of the most important (and interesting) phenomena, like turbulence in fluid dynamics, are not well understood on a formal level.
If you want to make fast progress nevertheless, cycle through the “trial, error, learning” loop as fast as you can: Test/Experiment, observe, abstract phenomenological laws (“low theory” is the only part of theories that stands the test of time), and improve. In case of complex/chaotic system behaviours, there is no alternative to this.
If there are limitations on building full scale experiments, do scaled-down physical models (the computing power implied in this exceeds all supercomputers in the world taken together; real world > simulated world). If this is also impossible, realise that you have reached the end of the doable. More computing power will only help you to delay that realisation.
Chip design, yes. Not manufacturing. Intel still has problems with its 10nm tech. AMD will be on 7nm tech soon, but the chips will be manufactured in Taiwan. The other country with 7nm tech is Korea. Secure US systems will only use chips manufactured in the US.
Lots of talk that the next gen US supercomputer will be based on the AMD Epyc chip (manufactured in Taiwan), but I can see no Epyc-based supercomputer in the latest list, except the prototype currently ranked at 43, whose design was licensed and manufactured in China, initially called Dhyana-Epyc, though I don’t know why the Epyc name was dropped. China has more experience using the Epyc chip.
Qualcomm has a joint venture, with China as the senior partner, designing the next gen chip. ARM Holdings said it will stop contact with China, but the joint venture in China is controlled by China. So China can build on existing known knowledge and branch out in new directions. Taiwan has indicated that it will fabricate any new Chinese chips.
A note of interest on Moore’s so-called “law” (more like, temporarily valid observation) from IEEE Computer 2013-12:
https://i.imgur.com/oOFRGS3.png
Putin has spoken so many times about the importance of AI that it seems he really believes it, so I wonder what he is doing about it, when apparently acquiring more supercomputers is not a priority for Russia.
This.
None of these supercomputers are doing useful work; we are long, long past the point where any feasible problem requires that much computation power.
Your phone is orders of magnitude more powerful than the Cray supercomputer of old; and those Crays were already enough to solve most problems.
“AI” has nothing to do with supercomputers.
“AI” is just applied statistics; or, more properly, statistics applied to classification problems.
You don’t need a powerful computer to do “AI”. All you need is
a) Good domain knowledge to filter useful variables from junk.
b) An understanding of the state of the art math and a knowledge of how to extend it.
The computing resources are actually minuscule; on the order of a few powerful gaming rigs.
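As a concrete instance of the "applied statistics" framing: a from-scratch logistic-regression classifier on a toy 2-D problem, which trains in milliseconds on any machine. The data and the decision rule are invented for illustration:

```python
import math
import random

# "Applied statistics" in ~25 lines: logistic regression trained by
# batch gradient descent on a toy 2-D classification problem.
rng = random.Random(0)
X = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(200)]
y = [1 if x1 + x2 > 0 else 0 for x1, x2 in X]  # ground-truth rule

w1 = w2 = b = 0.0
for _ in range(300):  # epochs of batch gradient descent, lr = 0.5
    g1 = g2 = gb = 0.0
    for (x1, x2), t in zip(X, y):
        p = 1.0 / (1.0 + math.exp(-(w1 * x1 + w2 * x2 + b)))
        g1 += (p - t) * x1
        g2 += (p - t) * x2
        gb += (p - t)
    n = len(X)
    w1 -= 0.5 * g1 / n
    w2 -= 0.5 * g2 / n
    b -= 0.5 * gb / n

preds = [1 if w1 * x1 + w2 * x2 + b > 0 else 0 for x1, x2 in X]
acc = sum(p == t for p, t in zip(preds, y)) / len(y)
print(f"training accuracy: {acc:.2f}")
```

The model recovers the separating line from the data alone, and the whole thing runs comfortably on a laptop, let alone a gaming rig.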
I’m talking about a true AI, not about expert systems.
What are the Irish using their supercomputers for? Not for brewing Guinness, I take it, nor national prestige. Maybe that could explain in part the excess of supercomputers in China vs the rest? Or should the Irish number simply be added to the UK total?
https://www.siliconrepublic.com/wp-content/uploads/2017/11/ISL-infographic.jpg
Used to be a joke that “God invented whisky to prevent the Irish from taking over the earth.” Maybe, supercomputers are a way around that? Alas, it is part of globohomo now.
Airframes and nuclear blasts were both solved for CAD back in the late 70s (so the 4th fighter jet generation), and the Su-27 as a platform came out more aerodynamically clean than the F-15 despite the Soviet handicap in electronics.
It is likely a question of finding a right algorithm more than brute force.
I bet most time on those Paddy abaci is used by the likes of MS and Apple. You don’t become a tax haven for nothing.
There is no “true AI” anywhere in sight, not even theoretically.
What in tarnation???
I missed the memo where we have access to computationally exploitable quantum subspace aether.
“640K should be enough for anybody”
Actually you do, because you don’t want to take all day on your classification test run (I have seen things you wouldn’t believe … 5 minutes of processing on a Sparc CPU to classify an audio sample as to whether it is “one” or “zero”; nowadays we are past that, luckily).
But you don’t want a “supercomputer”. You want bespoke hardware.
That is why TensorFlow coprocessors exist.
Technology for neural networks seems to be moving back to integrating analog circuitry back on the die.
Oh man no.
Very gross meshes, yes. You realize the power of a “super” of the “late 70s”? Cray-1 at 80 MHz. There is better stuff in a teenager’s pocket nowadays (but not optimized for linear algebra).
The “right algorithms” (aka. the simplification which still gives good result) is needed because you don’t have enough brute force. If you have enough brute force, you can use the correct algorithm.
And analog is totally different beast.
The variations, fluctuations and plain unpredictability.
No more simple 1 and 0.
The infinite in between…hehe….
Wu & bitcoin blurb:
Lisa Su is Best Girl!
The whole point of Sukhoi example was about the elegance of the airframe. ^_^
They have all the computational power now, and planes don’t look much different. The 70s algorithm holds up.
Stealth planes do look different from non-stealth planes. It’s not very complicated to design a stealthy object, but making it flyable and maneuverable on the level of a fighter jet is a whole other issue. The requirements are certainly way higher than back then. And since the low hanging fruits have all been taken, even relatively small improvements require exponentially higher computing capacity. But an F-22 would beat a Su-35 any day of the week, and twice on Sunday.
If a new technology is intended for commercial, humanitarian or military-deterrent purposes, then one would imagine that any advance would be trumpeted. Hence the most advanced technology is always going to be widely known.
Whether it will always be the case that the technology understood to be world-leading cutting edge is not actually a lot more advanced than anyone realises, I don’t know. However, if computers keep getting smarter a la AlphaZero, then it seems to me eventually there will be a strong general AI able to formulate strategic dissimulation and feign limited capabilities. I don’t believe such an entity would perceive any reason to let humans know it for what it was.
Ireland is a tax haven, home to many big internationals.
Google is a big player there.
https://www.educationinireland.com/en/why-study-in-ireland-/ireland-s-strengths/leading-global-companies-in-ireland.html
F-22 is a Wunderwaffe. That was discontinued. While the same people who ordered its development and procurement decided against further purchases. And dismantled the production line.
Something tells me that, if its supposed, hyped capabilities were real, they would have repeated its design instead of further developing and ordering “inferior” 4th generation derivatives and working on the F-35.
Maintenance-intensive designs that sacrifice quite a lot for the touted near-total stealth might not actually be practical, efficient designs in the case of an actual war with someone who can shoot back and possesses various detection technologies and countermeasures to F-22 employment.
The track record of US weapons, tank, ships, aircraft designers points me to the conclusion they know what they are doing and they make the correct decisions due to having more actual information and knowledge compared to general public and bystanders.
Besides, USA is not feared because of low hundred F-22s, but because of thousands of modern jet fighters and tens of thousands of skilled pilots who have a vast arsenal of modern weapons and a MIC capable of replenishing them.
F-22 most definitely wouldn’t come on top of Su-35S each and every time.
Only for small minded ignorant unimaginative people.
Hmm. Just talking about Qualcomm in China,
https://www.scmp.com/tech/big-tech/article/3006897/qualcomm-said-end-chip-partnership-local-government-chinas-rural
“Qualcomm said to end chip partnership with local government in China’s rural Guizhou province.”
The alleged source was 10 unnamed HXT employees from within the venture; the title is a bit untrue. I wonder if HXT was the one to initiate the termination. With the current US political situation, Qualcomm does not look able to appeal against the termination. The US shot itself in the foot again. “Qualcomm declined to comment on the report. … HXT is 55 per cent owned by the local Guizhou province and 45 per cent by Qualcomm, according to HXT’s website.” Still, the partnership might have ended, but HXT got a super ARM server chip design out of it. “On November 27, 2018, HXT announced in Beijing that its Arm-based server chip – the StarDragon 4800, had officially started mass production.”
35 sees it at engagement ranges and will be able to actually maneuver while still having missiles to spare, so it’s nowhere close to one-sided. Muh stealth has been dead for a while now.
Boeing promoting F-15X for US is very telling. They tried to pass it off as a stopgap, but truth is not hard to see here.
Um, server ARM so far is much more prominent in various corporate prospects than in the field. Sure, MS, Google, Faceberg and Bezos are supposed to be doing something with it, but what exactly? Outside of low-perf instances on AWS, nothing comes to mind.
For China, I think it is all about a diversified Plan B, just in case. The bulk of their future systems might be based on the Intel-replacement AMD Epyc clone, licensed and fabricated in China. Despite Cray bragging about their coming Epyc-based supercomputer, the latest June 2019 list has no Epyc-based system except the Chinese prototype Dhyana clone at rank 43, which has already been on the list since Nov 2018:
Rank|Rmax (PFlop/s)|Country|System
43|4.325|China|Advanced Computing System(PreE) – Sugon TC8600, Hygon Dhyana 32C 2GHz, Deep Computing Processor, 200Gb 6D-Torus Sugon
156|1.758|United States|Astra – Apollo 70, Cavium ThunderX2 ARM CN9975-2000 28C 2GHz, 4xEDR Infiniband HPE
There is a US ARM based system in rank 156. The Chinese StarDragon ARM chip might have application in the mobile devices replacing the genuine Qualcomm chips.
We’ll see, but your second sentence is highly questionable in light of the fact that all major powers are now building stealth planes.
F-35 looks different from earlier planes, maneuvers somewhat better than the F-16 with somewhat higher payload, or significantly better with the same payload, although also capable of much bigger payload (in which case it’ll dogfight worse). And is stealthy.
I already wrote that advantages will be only marginal for an exponentially higher computational power, but the lowest hanging fruits have already been harvested. Anyway, I’m pretty sure that even the F-35 would’ve been difficult to design with only the 1970s computational power.
We don’t know. A lot depends on the general circumstances, for example 100 F-22 against 50 Su-35 would result in different outcomes than 50 against 50; also, how fast they need results (like, F-22 tries to down Su-35 with an AMRAAM, then simply gets back to its base – even with 10% success rate, it’d be able to slowly destroy the whole Su-35 force over a longer time period, while it’d be difficult for the Su-35 to pursue into NATO or US controlled airspace); how well the pilots are trained; how good the doctrine is; how well prepared they are for the other side (i.e. what information they have about the other side’s abilities); luck; probably lots of other factors. Since there’s not been a proper war for a long time, it’s possible that the outcome would be heavily lopsided, and due to some minor factor, whose importance eluded both sides, but one of them got it right by luck. But it’s also possible that they’d be roughly similar, with pilot skill being the decisive factor, or that neither would have an advantage even regardless of skill. (So, a WW1 type situation.)
I won’t reply to every comment, but to this one:
Ireland has so many because its systems fall into two groups: (1) those owned by private companies, and (2) university clusters funded by a combination of EU projects and local government money. The EU has identified the Republic of Ireland as a center of research excellence and provides funding for supercomputers (among many other things), and at the same time the local government wants to invest in this sector, so many Irish universities now have their own cluster. The systems in group 1 are used for productive purposes (private companies have them because they need them for their work). As for group 2, there are probably more than necessary – universities could share much more, for example.
This is the Republic of Ireland; the UK government’s jurisdiction covers only Northern Ireland.
“But an F-22 would beat a Su-35 any day of the week, and twice on Sunday.”
As you said yourself, it depends upon circumstances. In a war over the Taiwan Strait, the F-22 would fare poorly, as it would be overwhelmed with large numbers of missiles and traditional fighters. Its refueling support (vulnerable non-stealth aircraft) and carriers would also be targeted for destruction by the Chinese. There’s also the fact that not all missiles fired by the F-22 will hit, as some portion will be fooled by electronic countermeasures. RAND looked at this in the late 2000s and came to the same conclusion: not enough F-22s have been made to make a difference. The same may be true of the F-35 up until 2030 or so. The Navy predicts that the F-35 won’t even be able to land on a carrier until 2027 – at best – so the F-22 will additionally find itself without critical support aircraft in a major war with China before 2030 or so. There is also the fact that the accompanying F-35 isn’t a very capable dogfighter against the best Russian fighters and their Chinese knockoffs. The military claims “similar maneuverability” to the F-16, but they forgot to mention acceleration and the small payload. Flying at low altitudes to avoid radar will additionally put those aircraft in danger if the Chinese put a load of high-performance fighters on top of them WVR (within visual range).
In a major land war with Russia in the European far east, presumably the same would happen. Perhaps the US, if they strike first, bloodies Russia’s or China’s nose, but the counterattack would be brutal and probably wear down the American opposition; the sortie rate of the F-35, for example, is very low and probably won’t get dramatically better, and the F-22 was built in small numbers and is irreplaceable, so high-performance 4th and 4.5 generation fighters could possibly overwhelm the Americans, who probably won’t be willing to take enormous casualties among their difficult-to-replace fighter corps (white males are a valuable and declining resource for the Americans these days, after all). In the future, I don’t expect a pozzed US mercenary army to stand up against dedicated peer or near-peer rivals in a major engagement for any significant period of time. Also, as you stated, there are probably a few factors US war planners haven’t considered. For example, what happens when the Chinese down all of the US’s military satellites? How well are the Americans trained for that? Will their future POC navy and air force be able to handle thinking on their feet when that happens? Perhaps that situation closes the gap significantly in a major war, as the US has never truly encountered it since the turn of the century, pozzed or not.
Ireland has 13 supercomputer systems on the list, but if you look at the sites, they are all in one location called “Software Company (M)”, and they are all identical Lenovo systems. The site name could be anonymized for security reasons, and since all the systems are the same, it could be a supercomputer service provider.
Or it could be something darker … Oh yes, M, mmmmm.
Internationally, “Software Company (M)” has 47 sites all over the world, all running Lenovo. In the US it calls itself “Hosting Services United States Software Company (M)”.
The choice of the supercomputer system confuses me. I am NOT going to dig for more info.
And Washington wants to keep it that way.
http://archive.is/r17td#selection-1871.11-1913.6
Lol no. Again: 35 sees 22 at engagement ranges.
Without necessarily announcing itself, even, because it actually has a good OLS (that bulbous thing right in front of the cockpit). If the 22 wants to attack BVR, it will have to turn its radar on and effectively shed whatever awfully thin “stealth” it may have had if its missiles were carried only internally. If it attempts WVR, the 35 sees it outright and the fight is fair from the start.
It’s all predicated on the AIM-120 having a launch range well in excess of sad reality anyway. Hint: it’s nowhere near what Wikipedia lists. ^_^ …Soviet planners clearly knew their shit back when the 4th gen was planned.
I’m not an expert, but what little I know is somewhat different from your take.
I don’t know what you mean when you say that the Su-35 “sees” the F-22 at “engagement ranges.”
First, we don’t know anything with certainty. These planes have never met in a real war, and even if they had, we wouldn’t have access to the classified documents describing those encounters. Even the classified documents wouldn’t be able to say much, because, for example, the F-35s in Iraq were equipped with special radar reflectors, which made the plane several (hundred? ten? five? three?) times “larger” to fire-control radars, to hide its real capabilities. So even if a Su-35 (or any other Russian plane) used its fire-control radar on the F-35s flying there, the information it received was not very reliable.
Anyway, the Su-35 would probably know the general whereabouts of the F-22, but it probably wouldn’t be able to lock on with a fire control radar. The F-22’s own radar helps in finding it (and you can certainly send a missile against its radar – which will be turned off long before the missile reaches it), but once the F-22 turns on its radar, it’ll quickly shoot at the Su-35, so the Su-35 will usually be busy with evasive maneuvers before it can even think about shooting back. And if the AIM-120 misses its target, the F-22 can just turn around and go back home.
My understanding is different. Turning on radar makes the radar itself (and its platform) a nice fat target, but only as long as the radar is turned on. Meanwhile, the F-22 will only turn on its radar if it is already in a nice shooting position – meaning that, as soon as the radar is turned on, the F-22 will quickly fire its AIM-120. After that, the Su-35 will be busy with evasive maneuvers, so it cannot try to shoot back. If another plane (or an air defense site) launches a missile at the F-22, then the F-22 will simply turn off its radar, meaning that the target will be lost for the Russian missile. (Granted, the AIM-120 will also probably not hit its target in such a case.)
Internally carried missiles are more than enough for air-to-air combat. The internal weapons bay also means better aerodynamic performance, so I’m pretty sure that’s how they are normally equipped in their air superiority configuration.
Okay, so you claim you have some hidden secret sources which are way more reliable than what one can read on Wikipedia. That’s possible, but I wouldn’t bet my house on it.
The criticism of the AMRAAM I’ve read is that it has very often been unable to destroy its intended targets, so in a real war against the Su-35 its success rate might be below 10%. I don’t know if that would be such a big problem. To me, the optimal strategy for the F-22 seems to be to launch the AIM-120 from BVR range and then, if it misses, just turn around and escape. After a dozen engagements, enemy force attrition would exceed 50% with zero losses of one’s own (okay, mistakes are usually made, so maybe a few casualties), even if the AMRAAM hits its target only 5-10% of the time.
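The attrition arithmetic above can be sanity-checked with a toy model (my own assumption, not from the thread): in each engagement, every surviving Su-35 faces exactly one AMRAAM shot with independent kill probability p, and the attackers take no losses.

```python
# Toy attrition model (hypothetical): each engagement, every surviving
# enemy aircraft is shot at once with independent kill probability p,
# and the shooter disengages without losses of its own.
def attrition(p: float, engagements: int) -> float:
    """Fraction of the enemy force destroyed after the given number of engagements."""
    return 1.0 - (1.0 - p) ** engagements

for p in (0.05, 0.10):
    print(f"kill probability {p:.0%}: "
          f"attrition after 12 engagements = {attrition(p, 12):.0%}")
# kill probability 5%: attrition after 12 engagements = 46%
# kill probability 10%: attrition after 12 engagements = 72%
```

Under this model, the claim holds comfortably at the 10% end of the range (about 72% attrition after a dozen engagements), while at 5% a dozen engagements falls just short of the halfway mark and it takes roughly 14 engagements to pass 50%.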
The US Air Force and Navy currently field hundreds or thousands of traditional fighters, so the F-22 wouldn’t be fighting against overwhelming numbers. In fact, it still has numerical superiority over the Su-35. You can add in the Su-30SM or MiG-35 numbers (the latter is currently just two prototypes, IIRC), but then why not add the F-15, F-16 etc. numbers on the other side? For the foreseeable future, the F-22 will have a numerical advantage over any fifth-generation plane fielded by Russia (and, for at least several more years, China too).
My understanding is that this is simply untrue. The F-35 has a vastly (several times) larger payload than the F-16, and people often compare acceleration and maneuverability at maximum payload, when in fact they’d have to compare similar weapons loads (an air-to-air configuration), in which case the F-35 actually becomes the better dogfighter of the two. The maximum payload of the F-35 is only useful for air-to-ground missions.
Another point is that the F-35 was, until very recently, inhibited by its software from reaching maximum performance. With the latest version, it is vastly more capable than it was just a couple of years ago. However, people often still base their opinions on obsolete information from those early software-version test flights.
Some of your points are good – yes, we cannot know whether the general nature of total warfare would render the difference between stealth fighters and the most modern non-stealth fighters irrelevant. But that’s beside my original point, which was that
A) stealth fighters are significantly better than non-stealth fighters for any mission (though this could be counterbalanced by the lower number of missions they can normally fly), and
B) their design might require supercomputers
Simply put: a Ferrari is maybe twenty times more expensive than a Smart coupé, but only about twice as fast. It won’t even necessarily win a race at the Hungaroring against a Smart, because a bad driver might crash it, while a good or merely average Smart driver might simply drive to the finish line.
Any improvement over a reasonably good machine gets exponentially more expensive.