Winamp & Shoutcast Forums

Winamp & Shoutcast Forums (http://forums.winamp.com/index.php)
-   Breaking News (http://forums.winamp.com/forumdisplay.php?f=80)
-   -   UNIX gets the boot! In favor of Lintel Machines (http://forums.winamp.com/showthread.php?t=145475)

zootm 26th August 2003 15:09

"kernel segfault ... resetting to factory default settings...

CRUSH... KILL... DESTROY..."

discoleo 26th August 2003 15:40

Nice picture! ;)

About vector chips:

Here is something from Cray:
Quote:

Today's supercomputer market is replete with "commodity clusters," products assembled from collections of servers or PCs. Clusters are adept at tackling small problems and large problems lacking complexity, but are inefficient at the most demanding, consequential challenges - especially those of industry. Climate research algorithms, for example, are unable to achieve high levels of performance on these computers.
The new Crays use vector chips, too. A vector chip is at least 2x as fast as the Itanium 2, and in real applications it proves much faster; it is the fastest architecture yet, and the Earth Simulator has been the top computer for more than a year. I had to laugh at the supercomputer from the site you mentioned earlier: 471 GFLOPS. The Earth Simulator delivers 35,000 GFLOPS, and to achieve comparable results with a cluster you would need at least 50,000-100,000 GFLOPS (possibly even 1 or 2 orders of magnitude more). Clusters are simply not good for most applications; in typical applications they scale at best logarithmically, i.e. 10 processors make it at most 2x faster than a single processor, and that is the optimistic estimate.

What does the future bring?

In the last 3-5 years, researchers have been working on quantum computers. Current quantum computers perform a couple of simple operations on some 100 quantum bits (qubits), but these operations are performed at the theoretical limit of computation. Nothing can go beyond that: Heisenberg's uncertainty principle for energy states that dE x dt is greater than the reduced Planck constant, and this is a fundamental limit (where dE = energy difference, dt = time difference, NOT differentials). So the time dt needed by a system to change its state is greater than (reduced h)/dE, and nothing can go faster than this. So the future is quantum computing.
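The bound being invoked here is the energy-time uncertainty relation (the usual textbook statement carries a factor of 1/2; the exact constant depends on the formulation used):

```latex
\Delta E \,\Delta t \;\ge\; \frac{\hbar}{2}
\quad\Longrightarrow\quad
\Delta t \;\ge\; \frac{\hbar}{2\,\Delta E}
```

In words: a system whose energy spread is \(\Delta E\) cannot complete a transition between distinguishable states faster than \(\hbar/(2\,\Delta E)\), which is why this is taken as a floor on the time per elementary operation.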

Even the US military has recognized this, and the supercomputer designed for the DARPA should fill the gap between current computers and the future Quantum computers.

Comment: The DF-224 is NOT an x86 (though it is very old, from the '70s). Its mathematical functionality was supplemented with a coprocessor based on the 80386 architecture (NOT replaced). Later NASA replaced it with something based on the 80486 architecture (Hubble was intended to function until 2010, not beyond, and one update mission is still pending; on real space missions you can't rely on such updates). Nevertheless, it seems NASA has had a lot of trouble with its missions, including the loss of 2 Mars probes (one was human error, the second an unknown cause); ESA hasn't had one. As I mentioned earlier, one Ariane crashed because of an invalid float-to-int conversion by its computer, similar to the fisp16 and fisp32 bugs in the PII.

x86 has bad flaws: the x86 debug registers DR0-DR7 are global across all processes and can cause a lot of problems. Read about this on Guninski's security homepage. It's not only M$'s fault; it's the x86 architecture that allows this. There's also the Pentium MMX bug, which is a serious bug for corporations.

Starbucks 26th August 2003 18:09

Dude, did you just skip every single little thing that I wrote or something?

Quote:

I had to laugh at the supercomputer from the site you mentioned earlier: 471 GFLOPS
Dude, when I posted this I did mention that it is by no means a comparison to Earth Simulator! I posted it because it holds the record, and breaks records, for performance/price in supercomputing, and it happened to be based on x86. Why is this important? I also mentioned that I am fully aware of Earth Simulator's 35,000 GFLOPS performance. What I said was that you can't claim Earth Simulator is better than anything x86, because ES was not designed to score high in a GFLOPS benchmark. Hell no, regardless of what it can score on top500.org, it was designed to do something other than that! Top500.org was just a reference; can you honestly conclude that ES is better than any x86? No you can't! Because:

1. The supercomputers on top500.org were entered as a reference, not to benchmark.
2. Each supercomputer was built for a specific application, to meet the buyer's specification; it was not built to be the fastest. I've been fully aware of all these supercomputers, and the fact that they were built for an application excludes them from comparison to other SCs.

Quote:

The DF-224 is NOT an x86 (though it is very old, from the '70s). Its mathematical functionality was supplemented with a coprocessor based on the 80386 architecture (NOT replaced).
1. I never said the DF-224 was an x86. Notice the slash between DF-224 and 80386.
2. It was Replaced/Converted/Added/Changed. NASA's words, not mine. Whatever you want to call it.
Quote:

Hubble was intended to function till 2010, not beyond and still 1 update mission is pending
Just because NASA doesn't plan to keep it operational after 2010 doesn't mean it's the CPU's fault. Did a NASA scientist happen to tell you, "Yeah, we're expecting Hubble's CPU to fail; we know it will in 2010, even though we've tested its computer system, it worked flawlessly, and it successfully sent us countless images throughout its years of operation"?
Quote:

1 was human fault, second unknown cause
So just because it was an unknown cause, you just blame the Intel chip inside the probe? Go ahead, what good does it do?
Quote:

Nevertheless, it seems that NASA had a lot of trouble with its missions
And you are trying to tell me that because NASA has problems, it's Intel's fault?

So you're trying to say that Intel's chips aren't fit for space? Then tell me: has NASA ever found a mission failure, or any kind of failure, resulting from an Intel chip failing? A real document? And how many successful missions has NASA had powered by an Intel chip? Tell me about Hubble: how many hundreds of thousands of successful pictures have been beamed back to Earth? Tell me about the Mars probe that carried and guided the rover. Was that successful? What about the countless successful shuttle launches and missions? Even if you can bring me one failure that proved to be the result of a failed Intel CPU, I can give you hundreds of successful operations done by Intel CPUs. This goes against what you said before: "it simply can't be used." If it simply can't be used, then why is NASA using them??? I'm not reading you clearly: first you say it simply can't be used, then you say it's used but it's no good, despite the fact that the shuttle is based on Intel, and Hubble, and the probes; damn, even the imaging systems in mission control are all Intel (and Linux just recently).
Quote:

Systems affected:
Win2K, Win2K SP1
have not tested on Win2K SP2 but according to Microsoft SP2 fixes this
Even though it affects x86s, it's purely software-based, because this problem affects Windows and no other OS (like Linux, Unix, or Solaris). It's Microsoft's failure, because it can be fixed.
Quote:

Also the Pentium MMX bug, which is a serious bug for corporations.
Wow, a bug. And SUN doesn't make mistakes, I suppose? What does this mean, that Intel's old-ass MMX isn't perfect? Does that make SUN's newer US3 glitch any cleaner? Just because it doesn't affect 1 GHz and up doesn't mean it doesn't matter, even if it has a patch: "Sun is likely to sustain more damage to its reputation than to its finances." My posting of this glitch is void, and your posting of the MMX glitch is also void from here on.

Okay, you want to talk performance, pure Intel vs. SUN? Then let's get the facts straight. From the start we were talking about Intel and SUN CPUs. Stop bringing up future chips or very, very old chips as if they had anything to do with anything current! And you keep bringing up Microsoft like it's our best friend, even though you know full well there is a full load of other OSes/apps that will run on an x86. So you're saying the UltraSPARC III is far superior to anything built by Intel, even though it is not an x86-based Intel chip? If so, here's where you're wrong, and you can't prove otherwise:

1. Intel is faster than SPARC in integer performance: Itanium 2 @ 1.5 GHz scores 1322/1322; even Xeon scores higher than SPARC in integer performance.

2. Intel is also faster than SPARC in floating point:
Itanium 2 @ 1.5 GHz: 2119/2119; UltraSPARC III Cu @ 1.2 GHz: 1074/1344; Opteron 246: 1209/1293.

3. On tpc.org, Intel owns the large majority of the performance/price results, in both TPC-C and TPC-H (everything else is basically the same).

4. Professionals who used SUN have now switched to Intel; the reason being: faster, cheaper. Can't argue with that.

5. NASA agrees too. Performance/price is important to them, and the majority there is also owned by Intel.

6. SUN sells Intel chips. Why? If I said that, I'd just be repeating myself.

7. You say that x86 power consumption and heat don't even come close to SUN's, but they do. You specified "at least 2x or 3x or more," but according to SUN, the US3 is 53-75 watts. (No, we're not comparing future chips or chips that are not out yet; only current chips.) Itanium 2 has a peak draw of 65 watts. Xeon is at 30-81 watts max. I don't see that much of a difference, so power draw is around the same for current Intel chips and SUN chips.

You gotta give some benchmarks. Or you can keep asking SUN why they sell Intel's chips if UltraSPARCs are "so much faster". If you can't prove me wrong (with real performance benchmarks), then I am right.;)

fwgx 30th August 2003 22:20

OK, a few things. The Gemini chips are stripped-down USIIi's; they go back to RISC design principles much more, and they still won't be beating Intel or AMD anytime soon on pure processing power. The whole idea is to increase memory I/O and make the most of the I/O buses, not so much to increase processing power, though Sun really needs that too. The Niagara chips, which are going to be essentially 4 Gemini chips on one die, take this a few steps further.

And whoever said US chips are optimised for Java is seriously taking the piss! Java is notorious for running slower on SPARC Solaris than it is on Wintel, although it has been getting better.

Starbucks 31st August 2003 03:37

discoleo, where are you?

Starbucks 10th September 2003 04:31

By the way, newer spacecraft like Mars Surveyor use: "On-board computing uses a single Intel 80C85 2 MHz CPU with 100 KIPs, 64 kB RAM main memory."

How's that for "unreliable"?



Copyright © 1999 - 2010 Nullsoft. All Rights Reserved.