: Interesting business story about Apple/IBM


jlcinc
May 21st, 2003, 04:13 PM
Saw this and thought it might be of interest.

John

http://www.businessweek.com/technology/content/may2003/tc20030521_2871_PG2_tc056.htm

Tomac
May 21st, 2003, 05:41 PM
Remember to check out page one.

IBM says the new Apple chip will be of the 64-bit variety, which means it can process twice as much information per cycle as existing 32-bit chips.

Mmmmm... good things cookin'.

jfpoole
May 21st, 2003, 06:31 PM
While it's strictly true that a 64-bit chip can process twice as much data per cycle as a 32-bit chip (all other things being equal, of course), a lot of the time applications won't take advantage of this fact.

Say, for example, I write a program that computes all of the primes less than a million. All of the numbers the program deals with can be expressed as 32-bit numbers. Running this program on a 64-bit platform won't yield any speed increase over running it on a 32-bit platform.

Of course, if I write a program that manipulates 64-bit values, then there will be a speed increase if I run it on a 64-bit platform rather than a 32-bit platform.

The real advantage (as far as I'm concerned) with moving from a 32-bit platform to a 64-bit platform is the ability to address far more memory in a process (2^32 times more, actually). While this isn't a problem for most users, it is a problem for some, and it's the sort of thing that's going to only get worse as time goes on.

used to be jwoodget
May 21st, 2003, 07:31 PM
jfp,

Did coders consolidate/compile data segments in moving from 8 to 16 to 32 bit processors? In other words, is there a processing efficiency gain to be made by concatenating input data if the processor can handle more bits? If not, I'd imagine that the speed gains from moving up the bit tree would be significantly reduced (diminishing returns). But then, this doesn't seem to be the case in graphics chips (which are at 128 bit). I'm utterly ignorant of this in spite of reading the Ars Technica article on 64 bit processing. I understand the memory addressing gain and agree that this is a significant limitation of 32 bit chips.

(( p g ))
May 21st, 2003, 09:12 PM
If Apple drops Motorola for IBM it won't just be for a faster chip: it will be because Moto has become an unreliable supplier to the point where delays and an apparent lack of interest in developing a G5 have cost Apple a sizeable chunk of its market. I'd be pulling my hair out if my business relied on these guys as a sole source for such an important component.

[ May 21, 2003, 11:11 PM: Message edited by: PGant ]

jfpoole
May 21st, 2003, 11:23 PM
jwoodget,

Developers will generally only re-write code to take advantage of new, wider registers if the code is manipulating data that is too wide to fit into existing registers.

Consider a graphics package that stores screen co-ordinates as 64-bit integers. On a 32-bit system, each co-ordinate would have to be split across multiple registers, and calculations on these co-ordinates would take longer to execute than equivalent calculations on 32-bit values. On a 64-bit system, each co-ordinate would fit in a single register, and calculations on these co-ordinates would be as fast as equivalent calculations on 32-bit values. Hence, moving the graphics package to a 64-bit platform would yield a significant speed increase.

Of course, if the screen co-ordinates are 32-bit integers, then there isn't an advantage to moving to a 64-bit platform. You can't cram two 32-bit integers into a 64-bit register and get correct results. This is where SIMD (single instruction, multiple data) instructions (like Altivec, MMX, and 3DNow!) come in handy; you can perform the same operation on multiple values which yields a hefty speed increase.

used to be jwoodget
May 22nd, 2003, 04:37 PM
Thanks jfp! Let's hope the adoption of optimized 64 bit programming is reasonably fast.

mycatsnameis
May 23rd, 2003, 10:14 AM
Thanks jfp! Let's hope the adoption of optimized 64 bit programming is reasonably fast.
Yes, faster than Altivec optimization and Cocoa-ization would be very nice :rolleyes: . Perhaps Quark will be able to roll that out before 2010.

jfpoole
May 23rd, 2003, 05:12 PM
Out of curiosity, why does it matter if an application is written in Carbon or Cocoa?

Chealion
May 24th, 2003, 12:09 AM
From a consumer perspective, not a bit. Although both Carbon and Cocoa have their pros and cons, working in either should be just fine; however, Carbon would most likely be phased out in about 5 years... That said, Carbon theoretically can run much faster than Cocoa.

jfpoole
May 24th, 2003, 01:10 PM
I've heard nothing that indicates that Carbon is going to be phased out at all. I'd imagine it's around for the long haul, especially since large parts of Cocoa are implemented using Carbon.

Chealion
May 24th, 2003, 01:37 PM
That was supposed to be 15... not 5... lol my bad... forgot the extra 10 years... Carbon will most likely die before Cocoa, but both will last for quite a while and work just fine. It ends up depending on the project you are working on.