Processor question

Discussion in 'Android General Discussions' started by ilikemoneygreen, Jul 1, 2010.

  1. ilikemoneygreen

    ilikemoneygreen Silver Member

    Joined:
    Apr 7, 2010
    Messages:
    2,578
    Likes Received:
    15
    Trophy Points:
    103
    Location:
    AZ, Superstition MTNs!
    Ratings:
    +15
    I just read http://www.droidforums.net/forum/droid-news/54992-first-dual-core-snapdragons-shipping-where.html and since my question isn't exactly news, I figured I'd just ask in here. :)
    My question is actually a couple of things (I'm just very curious about the subject altogether):
    1. Is the Moto Droid processor unique or something, because of its ability to be overclocked? I know not all processors can be overclocked, but is this design rare?
    2. Can these dual-core Snapdragons be overclocked?
    3. What exactly does dual core mean? Does it mean there is one processor that acts like two? Can half of it be overclocked while the other half isn't? If a 1.2 GHz processor were to battle a dual-core 1.2 GHz processor, who would win? What if the non-dual processor were overclocked to 2.0 GHz (hypothetically)? And one last question (I know I had a lot): does a dual-core 1.2 GHz processor mean it can go up to 2.4 GHz all together?

    And if someone can answer that, they get a dancing banana and a dancing droid with a "you're awesome" statement. (Who wouldn't want that!)
     
  2. Skull One

    Skull One Member

    Joined:
    Mar 11, 2010
    Messages:
    759
    Likes Received:
    6
    Trophy Points:
    18
    Ratings:
    +6
    Let's take a stab at this.

    1: No. See http://www.droidforums.net/forum/hacking-faqs/47871-overclocking-101-a.html for the answer.

    2: Overclocking can be done to anything that doesn't have a physical or software limitation that can't be circumvented. So we will have to wait and see whether this design does.

    3: a. Dual core means that the subsection of the CPU that executes the actual instructions is duplicated. This is different from two fully separate CPUs because dual cores share a common set of memory controllers, along with a host of other auxiliary chip functions.
    b. No.
    c. It is possible to design it that way.
    d. That depends on how the application is written. If the application is properly written multi-threaded code, then a dual core will always beat a single CPU. But if the code is not written for that, it should be a tie on average.
    e. That depends on exactly how the code was written and how the memory bus of each CPU (single and dual core) is designed.
    f. No. The shared pieces of the dual core become the bottleneck.


    From a practical application standpoint, let's look at why dual-core CPUs, in theory, cannot be 100% as fast as two single CPUs of the same exact design and speed. We will use a simple program to show the differences. NOTE: To all the coders out there, I know this example could be done more simply, but that wouldn't help with the explanation.

    We have a piece of software that contains an array of data that needs to be fed to a math equation. The array will contain the numbers one through four million. This will require about sixteen million bytes of memory, due to the four bytes per array entry I have chosen as the programmer. The equation is result = array element * pi. We will then store the result back into the array element that was used in the equation. We are going to consider this code to be properly threaded by the language and compiler.
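    As a rough sketch (not from the original post; Python used here for brevity, and the variable names are illustrative), that program looks like this:

    ```python
    import math

    # The array holds the numbers one through four million, as described
    # above. (A smaller N behaves identically; four million matches the
    # example's memory-footprint arithmetic.)
    N = 4_000_000
    data = list(range(1, N + 1))

    # result = array element * pi, stored back into the same element.
    for i in range(N):
        data[i] = data[i] * math.pi
    ```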

    With that in mind, the dual-core CPU will load the machine language code into each of the execution pipelines and then start requesting chunks of data from the section of the array that each core is working on. So core one takes data from one through one million and core two takes one million and one through two million. And there is the first issue. Core one's memory request and core two's are for two different parts of main memory. So the dual-core CPU stops execution of the code in both cores, fetches core one's data first, puts it into core one's internal memory segment, and then says go. Then it fetches core two's data, stores it in core two's internal memory segment, and then says go.
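    The split described above can be sketched with two explicit worker threads, each taking its own contiguous slice of the array (again a hypothetical illustration, not code from the post; note that CPython's GIL means these threads would not actually run the math in parallel the way two hardware cores do — the point is only how the work is partitioned):

    ```python
    import math
    import threading

    N = 2_000_000
    data = list(range(1, N + 1))

    def worker(lo, hi):
        # Each "core" works only on its own contiguous slice of the array.
        for i in range(lo, hi):
            data[i] = data[i] * math.pi

    # Core one takes elements 0 .. N/2; core two takes N/2 .. N.
    t1 = threading.Thread(target=worker, args=(0, N // 2))
    t2 = threading.Thread(target=worker, args=(N // 2, N))
    t1.start()
    t2.start()
    t1.join()
    t2.join()
    ```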

    Now if you had two separate single-core CPUs, with two separate motherboards and two separate memory buses, they would be faster, because the code would not be slowed down while one CPU requested different data than the other CPU.

    BUT, because there is always a but in complex examples, that isn't cost-effective for making a phone, much less a workhorse PC. So they made motherboards with one memory setup, two independent CPUs, and one shared memory controller that both CPUs had to talk to. It worked, but there were slowdowns because the CPUs had to talk to an outside memory controller and wait their turn in the memory request queue. The solution? Dual-core CPUs. With those, the memory controller has more direct access to what is going on in each core's execution, and the designers of the CPU can use branch and memory-request prediction to pre-load the data needed by each of the cores.

    I want to stress this is a very oversimplified example. But the theory is sound, and I doubt you wanted to read for the next two hours on the "why" of things working the way they do.
     
  3. takeshi

    takeshi Silver Member

    Joined:
    Nov 29, 2009
    Messages:
    4,581
    Likes Received:
    0
    Trophy Points:
    151
    Ratings:
    +0
    Not at all. Processors were overclocked for many, many years before the Droid was even conceived.
     
  4. Darkseider

    Darkseider Senior Member

    Joined:
    Mar 12, 2010
    Messages:
    1,863
    Likes Received:
    0
    Trophy Points:
    66
    Ratings:
    +0
    Skull, although your example is valid, you forgot to include a multi-threaded example. For instance, core 1 and core 2 spawn simultaneous threads executing the instructions, completing the job faster for each set of 1 million. So the memory is shared across two cores, with each core clocked at "X" MHz/GHz. This leads to faster completion of the task at hand, although once the task is complete it has to move on to the next 1 million and repeat the process. It can also spawn a multi-threaded process computing each set of 1 million simultaneously, depending on the memory constraints and processor performance. Assuming the memory (L1 + L2 + main memory) and processor are up to the task, it could complete the entire job quite quickly. That also assumes each data set being churned doesn't require the results of the previous one before it can be started. WEEEE, I am having fun here!
     
  5. Skull One

    Skull One Member

    Joined:
    Mar 11, 2010
    Messages:
    759
    Likes Received:
    6
    Trophy Points:
    18
    Ratings:
    +6
    LOL

    Hence the statement "Over simplified so you don't have to read for hours". Because I could write for the next day on the subject if I didn't have to work or eat :)
     
  6. christim

    christim DF Super Moderator Rescue Squad

    Joined:
    Jan 23, 2010
    Messages:
    5,100
    Likes Received:
    2
    Trophy Points:
    153
    Location:
    New England
    Ratings:
    +2
    I've saved the thread to see if ilikemoneygreen is going to swing by with that dancing banana and dancing droid or not.

    I'm not sure I want to see the non-oversimplified example, but like a horrific accident scene, I know I'd end up peeking :)

    Good reply, made sense to me. The two cores have to share, splitting the workload as well as coordinating said sharing, which slows things down.
     
  7. ilikemoneygreen

    ilikemoneygreen Silver Member

    Joined:
    Apr 7, 2010
    Messages:
    2,578
    Likes Received:
    15
    Trophy Points:
    103
    Location:
    AZ, Superstition MTNs!
    Ratings:
    +15
    dancedroid Lol, all of you are awesome! Thank you, Skull One, very in depth. Your Overclocking 101 explained a lot as well (those simple explanations are very helpful). And as for Darkseider... I wish I understood a sentence of that, lol; you sound very knowledgeable.
    dancedroid :icon_banana: And as promised, the dancing banana! :icon_banana: dancedroid (it's a party)

    (Sorry for the delay, I know it's been a couple of days. I try to come on the forums often, but schoolwork has made me lag in the fun free-time field. :( )
     
  8. Leonius

    Leonius Member

    Joined:
    Jun 8, 2010
    Messages:
    51
    Likes Received:
    0
    Trophy Points:
    6
    Ratings:
    +0
    Something not mentioned is that even single threaded apps will be faster on a dual core because the spare core can handle the OS and other background tasks, while a single core has to do it all.
     
  9. shaggy3131

    shaggy3131 New Member

    Joined:
    Feb 21, 2010
    Messages:
    23
    Likes Received:
    0
    Trophy Points:
    1
    Ratings:
    +0
    Can anyone verify this?
    Don't we come back to the fact that the software (OS and app) has to be written in a way that takes advantage of the dual cores and delegates work accordingly?
     
  10. ilikemoneygreen

    ilikemoneygreen Silver Member

    Joined:
    Apr 7, 2010
    Messages:
    2,578
    Likes Received:
    15
    Trophy Points:
    103
    Location:
    AZ, Superstition MTNs!
    Ratings:
    +15
    Wow, this is an old bump. I hope single-threaded apps are faster... I wish I knew enough to answer this... It makes sense in my mind, though (the way Leonius explained it).