Will PC-DOS run faster on 4 or 8 core modern machines?

When I run PC-DOS on my 4-core AMD Phenom, does it take advantage of the extra parallel CPUs? If not, is there a way to coax DOS into using all available CPUs, or does this require specific developer programming at assembly or C compilation time?

assembly cpu dos

asked 14 hours ago by jwzumwalt (edited 8 hours ago)

  • Apparently there was also Multiuser DOS (not derived from MS-DOS): en.wikipedia.org/wiki/Multiuser_DOS#Multiuser_DOS

    – Thorbjørn Ravn Andersen
    4 hours ago

  • You could certainly run DOS multiple times in parallel in VMs, assigning a dedicated CPU to each of them (see the sketch after these comments).

    – Thomas Weller
    4 hours ago

  • See also: superuser.com/questions/726348/…

    – traal
    3 hours ago
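
Following up on the VM suggestion above: a minimal sketch of one way to do that on a Linux host, assuming QEMU and glibc are available; "dos.img" is a placeholder image name. The affinity mask set before exec is inherited, so the VM (and the DOS inside it) stays on the chosen core; run one copy per core with different CPU numbers.

    /* Pin this process to one host core, then exec a DOS VM.
     * sched_setaffinity()'s mask survives execlp(), so QEMU runs
     * on that core only. "dos.img" is a placeholder. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(2, &set);                      /* host core 2 only */
        if (sched_setaffinity(0, sizeof set, &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        execlp("qemu-system-i386", "qemu-system-i386",
               "-fda", "dos.img", "-boot", "a", (char *)NULL);
        perror("execlp");                      /* reached only on failure */
        return 1;
    }
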
4 Answers

No, DOS won't use any additional CPU (*1) ever.



(Though it might run faster simply because modern CPUs are faster.)



In much the same way, DOS doesn't take advantage of extended memory or additional instructions.



DOS is a




  • Single CPU

  • Single User

  • Single Task

  • Single Program

  • Real Mode

  • 8086


operating system.



Even though it got a few extensions over time to tap into newer developments, like




  • A20 Handler for HMA usage

  • Utilities for extended memory usage like HIMEM.SYS or EMM386

  • Usage of certain 286 instructions


it never left its original setup. That's what Unix, OS/2 and Windows NT were meant for.



The same goes for third-party extensions like background utilities (SideKick) or task swappers/switchers (DoubleDOS et al.), and for the furthest-reaching combination of all, Windows (up to 98). They all added layers to manage DOS for some increased functionality, but didn't (and couldn't) change its basic workings.



Of course, application programs can use additional features of a machine, whether new CPU instructions, a new graphics card, more memory, or, in this case, an additional CPU. DOS extenders, for example, allowed protected-mode programs to run as DOS applications; the most notable is perhaps DOS/4GW, which became hugely popular because it shipped with the Watcom compilers.
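
As an illustration (a hypothetical sketch, not from the answer itself): under a 32-bit DOS extender such as DJGPP's, a program can allocate far more memory than real-mode DOS could ever address while still calling DOS services through INT 21h. The DPMI wrappers used here are DJGPP's.

    /* Hypothetical DJGPP sketch: a 32-bit protected-mode program running
     * as a DOS application. malloc() draws on extended memory via DPMI,
     * far beyond the 640 KB real-mode limit, while DOS itself remains
     * reachable through software interrupts. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <dpmi.h>                          /* DJGPP-specific */

    int main(void)
    {
        char *buf = malloc(8UL * 1024 * 1024); /* 8 MiB: trivial here */
        if (buf == NULL) {
            puts("no extended memory available");
            return 1;
        }
        memset(buf, 0, 8UL * 1024 * 1024);

        __dpmi_regs r;                         /* INT 21h AH=30h: version */
        memset(&r, 0, sizeof r);
        r.h.ah = 0x30;
        __dpmi_int(0x21, &r);
        printf("DOS %d.%02d underneath, 8 MiB buffer at %p\n",
               r.h.al, r.h.ah, (void *)buf);

        free(buf);
        return 0;
    }

Still one CPU, though: the extender changes what the program can address, not how many processors it runs on.
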



Then again, I do not know of any extender that allows the use of multiple CPUs concurrently.



It's worth noting that none of these additions - including the application-specific DOS extenders - changed the basic paradigm of DOS as a single-user, single-task operating system.





*1 - What to call it? CPU, core, hyperthread, socket? Well, in ye good old times of simple microprocessors we couldn't have cared less. Nowadays it gets blurry: with multiple processing units of varying capability, it gets hard to qualify. For most purposes the professional world continues to call the device (IC) a CPU, but avoids the term when it comes to system building and talks instead about sockets, leaving open how many processing units the IC plugged in will have - if any. Often the term core is used instead, but thanks to Intel using it as a brand name as well, that got blurred again. So for all RC.SE purposes I stay with the term CPU, as vague as it is, and qualify where needed.

– answered 14 hours ago by Raffzahn (edited 11 hours ago)

  • Probably digressive: I know that Windows ME reputedly provided only a very cut-down version of DOS; is that the reason that you draw the line at Windows 98 as seeking to provide an evolved DOS? This question is primarily motivated by my lack of awareness of what ME took away; I just have the vaguest sense that it was still the 9x kernel but no longer attempting to provide a workable DOS environment. So presumably whatever's there is just what it wasn't feasible to remove? Please ignore if not relevant!

    – Tommy
    13 hours ago






  • @Tommy Well, I did think about writing ME, but ME had already, as you note, not only extended and replaced some DOS functionality for Windows programs, but also disabled DOS use in itself. Prior to ME, DOS booted as usual and then Win got started as an application. With ME the DOS kernel started directly into Win without checking config.sys or autoexec.bat, so no drivers loaded, no programs executed: just the DOS kernel and then Win. Also, Win couldn't terminate back to DOS - as there was none. Running DOS needed a modified Win and a reboot. It's debatable, but I'd rather exclude it.

    – Raffzahn
    12 hours ago






  • @Tommy I guess I need to qualify. When saying "With ME the DOS kernel started directly into Win" and "Win couldn't terminate back to DOS - as there was none", that's due to the fact that Win got started not on top of (the resident part of) Command.com, but instead of it. That's why there was no base to operate on, even if one had managed to end Windows with a return to DOS. With this important step missing (on the way up as well as down), the base of ME was no longer even the minimal usable DOS we knew from before.

    – Raffzahn
    11 hours ago






  • Regarding caches, it's possible to use cache as RAM on current Intel CPUs (that's how they boot, before they've trained the memory); I have an entry on my never-ending to-do list which involves looking into booting DOS inside cache, with no RAM at all...

    – Stephen Kitt
    9 hours ago






  • @StephenKitt Ooohhhhh ... Shiney .... whenever you get close to playing with that, give us a note, so we can watch - or even participate.

    – Raffzahn
    9 hours ago


If by IBM DOS you mean IBM PC DOS, which was a rebranded derivative of MS-DOS, then the answer is no - DOS will only ever support a single core. Hyper-Threading and multiple cores are simply not supported by DOS.



Making DOS use multiple cores would be a major operation. First, DOS would have to support multitasking - not task switching or cooperative multitasking, but full pre-emptive multitasking. Then it would need to support SMP (symmetric multiprocessing). Both would be major undertakings; its peers in the 90s took years to get multiprocessing support reliable and efficient.



It is possible that a single application could take advantage of multiple cores if it used DOS as a bootstrap and then kicked DOS out, and essentially became its own operating system.



If you're looking for a 90s-era operating system that supports multiple cores, then you would be looking at Linux (2.0 or later), Windows NT 4 or OS/2 Warp.

– answered 14 hours ago by Richard Downer

  • AFAIK there were only two versions of OS/2 with support for multiple CPUs, OS/2 2.11 SMP and OS/2 Warp 4 AS SMP, and both are very rare.

    – Stephen Kitt
    14 hours ago











  • "It could not be task switching or cooperative multitasking, it would have to be full pre-emptive multitasking" <- why? shouldn't it be enough to give each "core" something to do and wait until this something says "hey, I finished"?

    – Felix Palmen
    13 hours ago






  • Please feel free to edit my post - I assumed pre-emptive multitasking as a given, but that assumption may not stand up to scrutiny, or at least may be disputed :-)

    – Richard Downer
    10 hours ago






  • @RichardDowner no, I agree with you. The question is about DOS. It is about running existing DOS software. Existing DOS software very often assumes exclusive ownership of hardware. To run multiple applications at once without each experiencing a whole bunch of surprises, you need to be able to intercede upon hardware accesses. That sort of intercession is called pre-empting. Saying "oh, but you could have offered this service, and then brand new software could have been written to use it" is completely orthogonal to the question posed.

    – Tommy
    10 hours ago






  • @FelixPalmen nobody is arguing that "preemption is a must for SMP". They're arguing that preemption is a must for finding a way to do anything useful with multiple cores within the confines of running MS-DOS software.

    – Tommy
    10 hours ago


The answers above are correct about core usage, but DOS would still run faster than on old machines because the CPU clock (MHz/GHz) is higher. This is actually a problem with many of the old games, because things happen faster than you can react.
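
To sketch why (an illustration with made-up numbers, echoing the widely documented Turbo Pascal "Runtime Error 200" bug discussed in the comments below): many programs calibrated their delays at startup by counting empty-loop iterations during one 55 ms BIOS timer tick, then dividing down with a 16-bit DIV. On x86, DIV faults when the quotient exceeds 16 bits, and old runtimes reported that fault as a division by zero.

    /* Illustration (made-up iteration rates) of the calibration failure:
     * count loop iterations in one 55 ms tick, then derive loops-per-ms
     * with what was, in the original runtimes, a 16-bit DIV. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Pretend rates for a slow, a middling, and a fast CPU. */
        const uint32_t iters_per_sec[] = { 1000000UL, 30000000UL, 200000000UL };

        for (int i = 0; i < 3; i++) {
            uint32_t count  = (uint32_t)(iters_per_sec[i] * 0.055);
            uint32_t per_ms = count / 55;      /* the runtime's 16-bit DIV */
            printf("%9lu iters/s -> per_ms = %6lu%s\n",
                   (unsigned long)iters_per_sec[i], (unsigned long)per_ms,
                   per_ms > 0xFFFF ? "  <- 16-bit DIV would fault here" : "");
        }
        return 0;
    }
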



If you want to test or play without committing to a wipe/reload scenario, you could always try FreeDOS in an emulator or on an old hard drive.

– answered by UnhandledExcepSean (new contributor)

  • Actually, the canonical example of speed causing problems wasn't, IMHO, games, but rather Turbo Pascal. There was a bug in certain versions of Turbo Pascal where executables had a timing loop at the beginning, and on machines of a certain speed (I think I first saw this with Pentium, but I'm not 100% sure) they would get a runtime error due to, I think, an overflow of an integer variable. Fortunately, someone came up with a patch (I'm sure I still have it here somewhere) that bypassed that code - I would run that as part of the compile process.

    – manassehkatz
    12 hours ago








  • @manassehkatz The first time I experienced "too fast" was Tetris that originally I had for a 286. When I installed it on a 386, I had to turn off the turbo button for it to be remotely playable. By my 486, it was unplayable :P

    – UnhandledExcepSean
    12 hours ago













  • Games should, inherently, understand the speed issue and deal with it - even though many did not. Ordinary applications shouldn't have to. One of the complications is that IBM (or maybe it was Microsoft, not sure whose idea it was) came up with 18.2 ticks per second - when plenty of other systems already used 60 ticks per second. That is fine for typical application timing purposes but way too slow for interactive games - so every game came with its own method of timing, and many of those methods failed past a certain CPU speed (or simply didn't adapt at all and assumed 4.77 MHz).

    – manassehkatz
    12 hours ago








  • For the record, I had to use the Turbo Pascal patch program in order to run Turbo Pascal software on a Pentium 200 (non-MMX) if memory serves. But only in real DOS, not when running inside Windows 95. So the speed of execution that causes its startup code to attempt a divide by zero must be somewhere close below that of an unencumbered P200.

    – Tommy
    11 hours ago






  • @manassehkatz, 18.2 Hz is the PIT's 1.193 MHz input clock divided by 65536; that clock is 1/12th of the 14.318 MHz master crystal (4 times the NTSC color subcarrier, and 3 times the 4.77 MHz CPU clock). Fortunately, the timer could be adjusted to produce other frequencies (see the sketch below).

    – Mark
    6 hours ago
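
A sketch of that adjustment (assuming a Borland-style real-mode compiler that provides outportb() in <dos.h>; the port numbers are the standard 8253/8254 interface): channel 0 of the PIT divides its 1,193,182 Hz input by a programmable divisor to set the IRQ0 rate.

    /* Reprogram PIT channel 0 (sketch for a Borland-style DOS compiler).
     * Divisor 0 means 65536, i.e. the default 18.2 Hz tick. DOS and the
     * BIOS keep time from IRQ0, so speed it up only if your handler
     * forwards every Nth tick, or restore the default before exiting. */
    #include <dos.h>

    #define PIT_CLOCK 1193182UL
    #define PIT_CMD   0x43
    #define PIT_CH0   0x40

    static void set_timer_hz(unsigned long hz)
    {
        unsigned divisor = (unsigned)(PIT_CLOCK / hz);
        outportb(PIT_CMD, 0x36);               /* ch 0, lo/hi byte, mode 3 */
        outportb(PIT_CH0, divisor & 0xFF);     /* low byte first */
        outportb(PIT_CH0, (divisor >> 8) & 0xFF);
    }

    int main(void)
    {
        set_timer_hz(60);   /* 60 ticks/s, like other systems of the era */
        /* ... game loop, IRQ0 handler hooked via setvect(8, ...) ... */
        outportb(PIT_CMD, 0x36);               /* restore the default:   */
        outportb(PIT_CH0, 0);                  /* divisor 0 == 65536     */
        outportb(PIT_CH0, 0);                  /* == 18.2 Hz             */
        return 0;
    }
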


DOS itself won't do anything to boot up the "extra" cores in a multicore system; it runs only on the boot CPU.



A program that does that is normally called an operating system. You could certainly have a program that takes over from DOS - maybe even one that saves the previous DOS state and can exit back to DOS - but nobody's written such a thing. You can find bootloaders that load Linux from DOS, though (LOADLIN, for example). It involves taking over the interrupt table and so on.



https://stackoverflow.com/questions/980999/what-does-multicore-assembly-language-look-like includes an answer with some details of what you'd need to do on x86: send inter-processor interrupts (IPIs) to bring up the other cores.
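
A rough sketch of the core of that sequence (assumptions: flat 32-bit code that can reach the local APIC at its default physical address 0xFEE00000, e.g. under a DOS extender with identity mapping; the required delays, the customary second SIPI, and the real-mode trampoline that must sit below 1 MB are all omitted):

    /* Wake an application processor (AP) with INIT-SIPI, by writing the
     * local APIC's interrupt command register. The AP starts executing
     * in real mode at physical address start_page << 12. */
    #include <stdint.h>

    #define LAPIC_BASE   0xFEE00000UL
    #define LAPIC_ICR_LO (*(volatile uint32_t *)(LAPIC_BASE + 0x300))
    #define LAPIC_ICR_HI (*(volatile uint32_t *)(LAPIC_BASE + 0x310))

    static void send_ipi(uint8_t apic_id, uint32_t cmd)
    {
        LAPIC_ICR_HI = (uint32_t)apic_id << 24;     /* destination APIC ID  */
        LAPIC_ICR_LO = cmd;                         /* writing low fires it */
    }

    void start_ap(uint8_t apic_id, uint8_t start_page)
    {
        send_ipi(apic_id, 0x00004500);              /* INIT                 */
        /* ... wait ~10 ms ... */
        send_ipi(apic_id, 0x00004600 | start_page); /* STARTUP (SIPI)       */
    }

Once an AP is running your code, DOS knows nothing about it: the program has, as above, effectively become its own operating system.
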





CPUs with fewer cores tend to have higher guaranteed clock speeds, because each core has a higher power budget. But CPUs that support "turbo" typically have a high single-core turbo clock they can use when other cores are idle.





Modern Intel CPUs with more cores also tend to have more L3 cache; a single core can benefit from all the L3 cache on the whole die. (Not on other CPUs in a multi-socket system, though).



If you're using a DOS extender that lets a DOS program access more than 1 MiB of RAM (e.g. running in 32-bit protected mode), it might actually benefit from having more than the 3 MiB of L3 cache that a low-end dual-core system has.



But otherwise DOS can't use more memory than the smallest L3 caches on modern mainstream x86 CPUs hold. (Or consider Skylake-X, where the per-core L2 caches are 1 MiB, up from 256 kiB: other than uncacheable I/O / device / VGA memory, everything would be a cache hit with 13-cycle latency, much faster than the ~45-cycle-latency L3 in a dual or quad core!)



But there's a downside to having more cores on Intel CPUs: L3 cache latency. They put a slice of L3 along with each core, and cores + L3 are connected by a ring bus. So more cores means more hops on the ring bus on average to get to the right slice. (This is reportedly even worse on Skylake-X, where a mesh connects cores. It's odd because a mesh should mean fewer hops.)



This extra latency also affects DRAM access, so single-core memory bandwidth is better on dual/quad-core desktop CPUs than on big many-core Xeons (see "Why is Skylake so much better than Broadwell-E for single-threaded memory throughput?" on Stack Overflow). Even though the Xeon has quad-channel or 6-channel memory controllers, a single core can't saturate them, and it actually has worse bandwidth than a single core of the same clock speed in a quad-core part. Bandwidth is limited by max_concurrency / latency.



(Of course this doesn't apply to L2 cache hits, and 256kiB L2 is a good fraction of the total memory that DOS programs can use without a DOS extender. And 2, 4, or 8 MiB of L3 cache is pretty huge by DOS standards.)





AMD Ryzen is different: it uses "core clusters" in which each group of 4 cores shares an L3 cache. More total cores won't give you more L3 that a single core can benefit from, but within a cluster, L3 latency is fixed and pretty good.






share|improve this answer























    Your Answer








    StackExchange.ready(function() {
    var channelOptions = {
    tags: "".split(" "),
    id: "648"
    };
    initTagRenderer("".split(" "), "".split(" "), channelOptions);

    StackExchange.using("externalEditor", function() {
    // Have to fire editor after snippets, if snippets enabled
    if (StackExchange.settings.snippets.snippetsEnabled) {
    StackExchange.using("snippets", function() {
    createEditor();
    });
    }
    else {
    createEditor();
    }
    });

    function createEditor() {
    StackExchange.prepareEditor({
    heartbeatType: 'answer',
    autoActivateHeartbeat: false,
    convertImagesToLinks: false,
    noModals: true,
    showLowRepImageUploadWarning: true,
    reputationToPostImages: null,
    bindNavPrevention: true,
    postfix: "",
    imageUploader: {
    brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
    contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
    allowUrls: true
    },
    noCode: true, onDemand: true,
    discardSelector: ".discard-answer"
    ,immediatelyShowMarkdownHelp:true
    });


    }
    });














    draft saved

    draft discarded


















    StackExchange.ready(
    function () {
    StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fretrocomputing.stackexchange.com%2fquestions%2f9082%2fwill-pc-dos-run-faster-on-4-or-8-core-modern-machines%23new-answer', 'question_page');
    }
    );

    Post as a guest















    Required, but never shown

























    4 Answers
    4






    active

    oldest

    votes








    4 Answers
    4






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes









    28














    No, DOS won't use any additional CPU (*1) ever.



    (Though it might run faster due them being faster)



    Quite the same way as DOS doesn't take advantage of the extended memory or additional instructions.



    DOS is a




    • Single CPU

    • Single User

    • Single Task

    • Single Program

    • Real Mode

    • 8086


    operating system.



    Even through it got a few extensions over time to tap a bit into newer developments, like




    • A20 Handler for HMA usage

    • Utilities for extended memory usage like HIMEM.SYS or EMM386

    • Usage of certain 286 instructions


    it never left its original setup. That's what Unix, OS/2 and Windows NT were meant for.



    Similar goes for third party extensions like background utilities (SideKick) or process swappers/switchers (DoubleDos et.al.), or the eventually furthest reaching combination of all, Windows (up to 98). They all added layers to manage DOS for some increased functionality, but didn't (and couldn't) change its basic workings.



    Of course, application programs can use additional features of a machine, no matter if it's new CPU instructions, a new graphics card, more memory, or in this case an additional CPU. Much like DOS-extenders allowed protected mode programs to run as DOS applications, most notably maybe DOS/4GW, which got huge popularity due being delivered with Watcom compilers.



    Then again, I do not know of any extender allowing the use of concurrent CPUs.



    It's worth noting that none of these additions - including the application specific DOS-extenders - were changing the basic paradigm of DOS as a single-user, single-task operating system.





    *1 - What to call it? CPU, core, hyperthread, socket? Well, in ye good old times of simple microprocessors we couldn't care less. Nowadays it gets blurry. With multiple processing units of various quality it gets hard to qualify. For most purposes the professional world continues to call the device (IC) a CPU, but avoids the term when it comes to system building and talks now about sockets, leaving open how many processing units the IC plugged in will have - or if at all. Often the term core is used instead, but thanks to Intel using this as well as a brand name, it got blurred again. So for all RC.SE purposes I stay with the term CPU, as vague as it is, and qualify if needed.






    share|improve this answer





















    • 1





      Probably digressive: I know that Windows ME reputedly provided only a very cut-down version of DOS; is that the reason that you draw the line at Windows 98 as seeking to provide an evolved DOS? This question is primarily motivated by my lack of awareness of what ME took away; I just have the vaguest sense that it was still the 9x kernel but no longer attempting to provide a workable DOS environment. So presumably whatever's there is just what it wasn't feasible to remove? Please ignore if not relevant!

      – Tommy
      13 hours ago






    • 2





      @Tommy Well, I did think about writing ME, but then ME had already, as you note, not only extended and replaced some DOS functionality for windows programs, but also disabled DOS use in itself. Prior to ME, DOS booted as usual and then Win got started as an application. With ME the DOS kernal directly started into Win without checking config.sys or autoexec.bat, so no drivers loaded, no programs executed. Just the DOS kernel and then Win. Also, Win couldn't terminate back to DOS - as there was none. Runing DOS needed a modified Win and a reboot. It's debatable, but I'll rather exclude it.

      – Raffzahn
      12 hours ago






    • 1





      @Tommy I guess I need to qualify. When saying "With ME the DOS kernal directly started into Win" and "Win couldn't terminate back to DOS - as there was none", then its due the fact that Win got started not on top of (the resident part) of Command.com, but instead. That's why there was no base to operate on, even if one would have manged to end Windows with returning to DOS. So with this important step missing (on the way up as well down), the base of ME was no longer even a minimum usable DOS as we knew from before.

      – Raffzahn
      11 hours ago






    • 7





      Regarding caches, it’s possible to use cache as RAM on current Intel CPUs (that’s how they boot, before they’ve trained the memory); I have an entry on my never-ending to-do list which involves looking into booting DOS inside cache, with no RAM at all...

      – Stephen Kitt
      9 hours ago






    • 2





      @StephenKitt Ooohhhhh ... Shiney .... when ever you get close to play with that, give us a note, so we can watch - or even participate.

      – Raffzahn
      9 hours ago


















    28














    No, DOS won't use any additional CPU (*1) ever.



    (Though it might run faster due them being faster)



    Quite the same way as DOS doesn't take advantage of the extended memory or additional instructions.



    DOS is a




    • Single CPU

    • Single User

    • Single Task

    • Single Program

    • Real Mode

    • 8086


    operating system.



    Even through it got a few extensions over time to tap a bit into newer developments, like




    • A20 Handler for HMA usage

    • Utilities for extended memory usage like HIMEM.SYS or EMM386

    • Usage of certain 286 instructions


    it never left its original setup. That's what Unix, OS/2 and Windows NT were meant for.



    Similar goes for third party extensions like background utilities (SideKick) or process swappers/switchers (DoubleDos et.al.), or the eventually furthest reaching combination of all, Windows (up to 98). They all added layers to manage DOS for some increased functionality, but didn't (and couldn't) change its basic workings.



    Of course, application programs can use additional features of a machine, no matter if it's new CPU instructions, a new graphics card, more memory, or in this case an additional CPU. Much like DOS-extenders allowed protected mode programs to run as DOS applications, most notably maybe DOS/4GW, which got huge popularity due being delivered with Watcom compilers.



    Then again, I do not know of any extender allowing the use of concurrent CPUs.



    It's worth noting that none of these additions - including the application specific DOS-extenders - were changing the basic paradigm of DOS as a single-user, single-task operating system.





    *1 - What to call it? CPU, core, hyperthread, socket? Well, in ye good old times of simple microprocessors we couldn't care less. Nowadays it gets blurry. With multiple processing units of various quality it gets hard to qualify. For most purposes the professional world continues to call the device (IC) a CPU, but avoids the term when it comes to system building and talks now about sockets, leaving open how many processing units the IC plugged in will have - or if at all. Often the term core is used instead, but thanks to Intel using this as well as a brand name, it got blurred again. So for all RC.SE purposes I stay with the term CPU, as vague as it is, and qualify if needed.






    share|improve this answer





















    • 1





      Probably digressive: I know that Windows ME reputedly provided only a very cut-down version of DOS; is that the reason that you draw the line at Windows 98 as seeking to provide an evolved DOS? This question is primarily motivated by my lack of awareness of what ME took away; I just have the vaguest sense that it was still the 9x kernel but no longer attempting to provide a workable DOS environment. So presumably whatever's there is just what it wasn't feasible to remove? Please ignore if not relevant!

      – Tommy
      13 hours ago






    • 2





      @Tommy Well, I did think about writing ME, but then ME had already, as you note, not only extended and replaced some DOS functionality for windows programs, but also disabled DOS use in itself. Prior to ME, DOS booted as usual and then Win got started as an application. With ME the DOS kernal directly started into Win without checking config.sys or autoexec.bat, so no drivers loaded, no programs executed. Just the DOS kernel and then Win. Also, Win couldn't terminate back to DOS - as there was none. Runing DOS needed a modified Win and a reboot. It's debatable, but I'll rather exclude it.

      – Raffzahn
      12 hours ago






    • 1





      @Tommy I guess I need to qualify. When saying "With ME the DOS kernal directly started into Win" and "Win couldn't terminate back to DOS - as there was none", then its due the fact that Win got started not on top of (the resident part) of Command.com, but instead. That's why there was no base to operate on, even if one would have manged to end Windows with returning to DOS. So with this important step missing (on the way up as well down), the base of ME was no longer even a minimum usable DOS as we knew from before.

      – Raffzahn
      11 hours ago






    • 7





      Regarding caches, it’s possible to use cache as RAM on current Intel CPUs (that’s how they boot, before they’ve trained the memory); I have an entry on my never-ending to-do list which involves looking into booting DOS inside cache, with no RAM at all...

      – Stephen Kitt
      9 hours ago






    • 2





      @StephenKitt Ooohhhhh ... Shiney .... when ever you get close to play with that, give us a note, so we can watch - or even participate.

      – Raffzahn
      9 hours ago
















    28












    28








    28







    No, DOS won't use any additional CPU (*1) ever.



    (Though it might run faster due them being faster)



    Quite the same way as DOS doesn't take advantage of the extended memory or additional instructions.



    DOS is a




    • Single CPU

    • Single User

    • Single Task

    • Single Program

    • Real Mode

    • 8086


    operating system.



    Even through it got a few extensions over time to tap a bit into newer developments, like




    • A20 Handler for HMA usage

    • Utilities for extended memory usage like HIMEM.SYS or EMM386

    • Usage of certain 286 instructions


    it never left its original setup. That's what Unix, OS/2 and Windows NT were meant for.



    Similar goes for third party extensions like background utilities (SideKick) or process swappers/switchers (DoubleDos et.al.), or the eventually furthest reaching combination of all, Windows (up to 98). They all added layers to manage DOS for some increased functionality, but didn't (and couldn't) change its basic workings.



    Of course, application programs can use additional features of a machine, no matter if it's new CPU instructions, a new graphics card, more memory, or in this case an additional CPU. Much like DOS-extenders allowed protected mode programs to run as DOS applications, most notably maybe DOS/4GW, which got huge popularity due being delivered with Watcom compilers.



    Then again, I do not know of any extender allowing the use of concurrent CPUs.



    It's worth noting that none of these additions - including the application specific DOS-extenders - were changing the basic paradigm of DOS as a single-user, single-task operating system.





    *1 - What to call it? CPU, core, hyperthread, socket? Well, in ye good old times of simple microprocessors we couldn't care less. Nowadays it gets blurry. With multiple processing units of various quality it gets hard to qualify. For most purposes the professional world continues to call the device (IC) a CPU, but avoids the term when it comes to system building and talks now about sockets, leaving open how many processing units the IC plugged in will have - or if at all. Often the term core is used instead, but thanks to Intel using this as well as a brand name, it got blurred again. So for all RC.SE purposes I stay with the term CPU, as vague as it is, and qualify if needed.






    share|improve this answer















    No, DOS won't use any additional CPU (*1) ever.



    (Though it might run faster due them being faster)



    Quite the same way as DOS doesn't take advantage of the extended memory or additional instructions.



    DOS is a




    • Single CPU

    • Single User

    • Single Task

    • Single Program

    • Real Mode

    • 8086


    operating system.



    Even through it got a few extensions over time to tap a bit into newer developments, like




    • A20 Handler for HMA usage

    • Utilities for extended memory usage like HIMEM.SYS or EMM386

    • Usage of certain 286 instructions


    it never left its original setup. That's what Unix, OS/2 and Windows NT were meant for.



    Similar goes for third party extensions like background utilities (SideKick) or process swappers/switchers (DoubleDos et.al.), or the eventually furthest reaching combination of all, Windows (up to 98). They all added layers to manage DOS for some increased functionality, but didn't (and couldn't) change its basic workings.



    Of course, application programs can use additional features of a machine, no matter if it's new CPU instructions, a new graphics card, more memory, or in this case an additional CPU. Much like DOS-extenders allowed protected mode programs to run as DOS applications, most notably maybe DOS/4GW, which got huge popularity due being delivered with Watcom compilers.



    Then again, I do not know of any extender allowing the use of concurrent CPUs.



    It's worth noting that none of these additions - including the application specific DOS-extenders - were changing the basic paradigm of DOS as a single-user, single-task operating system.





    *1 - What to call it? CPU, core, hyperthread, socket? Well, in ye good old times of simple microprocessors we couldn't care less. Nowadays it gets blurry. With multiple processing units of various quality it gets hard to qualify. For most purposes the professional world continues to call the device (IC) a CPU, but avoids the term when it comes to system building and talks now about sockets, leaving open how many processing units the IC plugged in will have - or if at all. Often the term core is used instead, but thanks to Intel using this as well as a brand name, it got blurred again. So for all RC.SE purposes I stay with the term CPU, as vague as it is, and qualify if needed.







    share|improve this answer














    share|improve this answer



    share|improve this answer








    edited 11 hours ago

























    answered 14 hours ago









    RaffzahnRaffzahn

    49.9k6115202




    49.9k6115202








    • 1





      Probably digressive: I know that Windows ME reputedly provided only a very cut-down version of DOS; is that the reason that you draw the line at Windows 98 as seeking to provide an evolved DOS? This question is primarily motivated by my lack of awareness of what ME took away; I just have the vaguest sense that it was still the 9x kernel but no longer attempting to provide a workable DOS environment. So presumably whatever's there is just what it wasn't feasible to remove? Please ignore if not relevant!

      – Tommy
      13 hours ago






    • 2





      @Tommy Well, I did think about writing ME, but then ME had already, as you note, not only extended and replaced some DOS functionality for windows programs, but also disabled DOS use in itself. Prior to ME, DOS booted as usual and then Win got started as an application. With ME the DOS kernal directly started into Win without checking config.sys or autoexec.bat, so no drivers loaded, no programs executed. Just the DOS kernel and then Win. Also, Win couldn't terminate back to DOS - as there was none. Runing DOS needed a modified Win and a reboot. It's debatable, but I'll rather exclude it.

      – Raffzahn
      12 hours ago






    • 1





      @Tommy I guess I need to qualify. When saying "With ME the DOS kernal directly started into Win" and "Win couldn't terminate back to DOS - as there was none", then its due the fact that Win got started not on top of (the resident part) of Command.com, but instead. That's why there was no base to operate on, even if one would have manged to end Windows with returning to DOS. So with this important step missing (on the way up as well down), the base of ME was no longer even a minimum usable DOS as we knew from before.

      – Raffzahn
      11 hours ago






    • 7





      Regarding caches, it’s possible to use cache as RAM on current Intel CPUs (that’s how they boot, before they’ve trained the memory); I have an entry on my never-ending to-do list which involves looking into booting DOS inside cache, with no RAM at all...

      – Stephen Kitt
      9 hours ago






    • 2





      @StephenKitt Ooohhhhh ... Shiney .... when ever you get close to play with that, give us a note, so we can watch - or even participate.

      – Raffzahn
      9 hours ago
















    • 1





      Probably digressive: I know that Windows ME reputedly provided only a very cut-down version of DOS; is that the reason that you draw the line at Windows 98 as seeking to provide an evolved DOS? This question is primarily motivated by my lack of awareness of what ME took away; I just have the vaguest sense that it was still the 9x kernel but no longer attempting to provide a workable DOS environment. So presumably whatever's there is just what it wasn't feasible to remove? Please ignore if not relevant!

      – Tommy
      13 hours ago






    • 2





      @Tommy Well, I did think about writing ME, but then ME had already, as you note, not only extended and replaced some DOS functionality for windows programs, but also disabled DOS use in itself. Prior to ME, DOS booted as usual and then Win got started as an application. With ME the DOS kernal directly started into Win without checking config.sys or autoexec.bat, so no drivers loaded, no programs executed. Just the DOS kernel and then Win. Also, Win couldn't terminate back to DOS - as there was none. Runing DOS needed a modified Win and a reboot. It's debatable, but I'll rather exclude it.

      – Raffzahn
      12 hours ago






    • 1





      @Tommy I guess I need to qualify. When saying "With ME the DOS kernal directly started into Win" and "Win couldn't terminate back to DOS - as there was none", then its due the fact that Win got started not on top of (the resident part) of Command.com, but instead. That's why there was no base to operate on, even if one would have manged to end Windows with returning to DOS. So with this important step missing (on the way up as well down), the base of ME was no longer even a minimum usable DOS as we knew from before.

      – Raffzahn
      11 hours ago






    • 7





      Regarding caches, it’s possible to use cache as RAM on current Intel CPUs (that’s how they boot, before they’ve trained the memory); I have an entry on my never-ending to-do list which involves looking into booting DOS inside cache, with no RAM at all...

      – Stephen Kitt
      9 hours ago






    • 2





      @StephenKitt Ooohhhhh ... Shiney .... when ever you get close to play with that, give us a note, so we can watch - or even participate.

      – Raffzahn
      9 hours ago










    1




    1





    Probably digressive: I know that Windows ME reputedly provided only a very cut-down version of DOS; is that the reason that you draw the line at Windows 98 as seeking to provide an evolved DOS? This question is primarily motivated by my lack of awareness of what ME took away; I just have the vaguest sense that it was still the 9x kernel but no longer attempting to provide a workable DOS environment. So presumably whatever's there is just what it wasn't feasible to remove? Please ignore if not relevant!

    – Tommy
    13 hours ago





    Probably digressive: I know that Windows ME reputedly provided only a very cut-down version of DOS; is that the reason that you draw the line at Windows 98 as seeking to provide an evolved DOS? This question is primarily motivated by my lack of awareness of what ME took away; I just have the vaguest sense that it was still the 9x kernel but no longer attempting to provide a workable DOS environment. So presumably whatever's there is just what it wasn't feasible to remove? Please ignore if not relevant!

    – Tommy
    13 hours ago




    2




    2





    @Tommy Well, I did think about writing ME, but then ME had already, as you note, not only extended and replaced some DOS functionality for windows programs, but also disabled DOS use in itself. Prior to ME, DOS booted as usual and then Win got started as an application. With ME the DOS kernal directly started into Win without checking config.sys or autoexec.bat, so no drivers loaded, no programs executed. Just the DOS kernel and then Win. Also, Win couldn't terminate back to DOS - as there was none. Runing DOS needed a modified Win and a reboot. It's debatable, but I'll rather exclude it.

    – Raffzahn
    12 hours ago





    @Tommy Well, I did think about writing ME, but then ME had already, as you note, not only extended and replaced some DOS functionality for windows programs, but also disabled DOS use in itself. Prior to ME, DOS booted as usual and then Win got started as an application. With ME the DOS kernal directly started into Win without checking config.sys or autoexec.bat, so no drivers loaded, no programs executed. Just the DOS kernel and then Win. Also, Win couldn't terminate back to DOS - as there was none. Runing DOS needed a modified Win and a reboot. It's debatable, but I'll rather exclude it.

    – Raffzahn
    12 hours ago




    1




    1





    @Tommy I guess I need to qualify. When saying "With ME the DOS kernal directly started into Win" and "Win couldn't terminate back to DOS - as there was none", then its due the fact that Win got started not on top of (the resident part) of Command.com, but instead. That's why there was no base to operate on, even if one would have manged to end Windows with returning to DOS. So with this important step missing (on the way up as well down), the base of ME was no longer even a minimum usable DOS as we knew from before.

    – Raffzahn
    11 hours ago





    @Tommy I guess I need to qualify. When saying "With ME the DOS kernal directly started into Win" and "Win couldn't terminate back to DOS - as there was none", then its due the fact that Win got started not on top of (the resident part) of Command.com, but instead. That's why there was no base to operate on, even if one would have manged to end Windows with returning to DOS. So with this important step missing (on the way up as well down), the base of ME was no longer even a minimum usable DOS as we knew from before.

    – Raffzahn
    11 hours ago




    7




    7





    Regarding caches, it’s possible to use cache as RAM on current Intel CPUs (that’s how they boot, before they’ve trained the memory); I have an entry on my never-ending to-do list which involves looking into booting DOS inside cache, with no RAM at all...

    – Stephen Kitt
    9 hours ago





    Regarding caches, it’s possible to use cache as RAM on current Intel CPUs (that’s how they boot, before they’ve trained the memory); I have an entry on my never-ending to-do list which involves looking into booting DOS inside cache, with no RAM at all...

    – Stephen Kitt
    9 hours ago




    2




    2





    @StephenKitt Ooohhhhh ... Shiney .... when ever you get close to play with that, give us a note, so we can watch - or even participate.

    – Raffzahn
    9 hours ago







    @StephenKitt Ooohhhhh ... Shiney .... when ever you get close to play with that, give us a note, so we can watch - or even participate.

    – Raffzahn
    9 hours ago













    10














    If by IBM DOS you mean IBM PC DOS, which was a rebrand/derivate of MS-DOS, then the answer is no - DOS will only ever support a single core. HyperThreading and multiple cores is simply not supported by DOS.



    Making DOS use multiple cores would be a major operation. Firstly DOS would have to support multitasking. It could not be task switching or cooperative multitasking, it would have to be full pre-emptive multitasking. Then it would need to be developed to support SMP (symmetric multiprocessing). Both of these tasks would be major undertakings - their peers in the 90s took years to get reliable and efficient multiprocessing support.



    It is possible that a single application could take advantage of multiple cores if it used DOS as a bootstrap and then kicked DOS out, and essentially became its own operating system.



    If you're looking for a 90s-era operating system that supports multiple cores, then you would be looking at Linux (2.0 or later), Windows NT 4 or OS/2 Warp.






    share|improve this answer



















    • 1





      AFAIK there were only two versions of OS/2 with support for multiple CPUs, OS/2 2.11 SMP and OS/2 Warp 4 AS SMP, and both are very rare.

      – Stephen Kitt
      14 hours ago











    • "It could not be task switching or cooperative multitasking, it would have to be full pre-emptive multitasking" <- why? shouldn't it be enough to give each "core" something to do and wait until this something says "hey, I finished"?

      – Felix Palmen
      13 hours ago






    • 1





      Please feel free to edit my post - I assumed pre-emptive multitasking as a given, but that assumption may not stand up to scrutiny, or at least may be disputed :-)

      – Richard Downer
      10 hours ago






    • 1





      @RichardDowner no, I agree with you. The question is about DOS. It is about running existing DOS software. Existing DOS software very often assumes exclusive ownership of hardware. To run multiple applications at once without each experiencing a whole bunch of surprises, you need to be able to intercede upon hardware accesses. That sort of intercession is called pre-empting. Saying "oh, but you could have offered this service, and then brand new software could have been written to use it" is completely orthogonal to the question posed.

      – Tommy
      10 hours ago






    • 2





      @FelixPalmen nobody is arguing that "preemption is a must for SMP". They're arguing that preemption is a must for finding a way to do anything useful with multiple cores within the confines of running MS-DOS software.

      – Tommy
      10 hours ago
















    10














    If by IBM DOS you mean IBM PC DOS, which was a rebrand/derivate of MS-DOS, then the answer is no - DOS will only ever support a single core. HyperThreading and multiple cores is simply not supported by DOS.



    Making DOS use multiple cores would be a major operation. Firstly DOS would have to support multitasking. It could not be task switching or cooperative multitasking, it would have to be full pre-emptive multitasking. Then it would need to be developed to support SMP (symmetric multiprocessing). Both of these tasks would be major undertakings - their peers in the 90s took years to get reliable and efficient multiprocessing support.



    It is possible that a single application could take advantage of multiple cores if it used DOS as a bootstrap and then kicked DOS out, and essentially became its own operating system.



    If you're looking for a 90s-era operating system that supports multiple cores, then you would be looking at Linux (2.0 or later), Windows NT 4 or OS/2 Warp.






    share|improve this answer



















    • 1





      AFAIK there were only two versions of OS/2 with support for multiple CPUs, OS/2 2.11 SMP and OS/2 Warp 4 AS SMP, and both are very rare.

      – Stephen Kitt
      14 hours ago











    • "It could not be task switching or cooperative multitasking, it would have to be full pre-emptive multitasking" <- why? shouldn't it be enough to give each "core" something to do and wait until this something says "hey, I finished"?

      – Felix Palmen
      13 hours ago






    • 1





      Please feel free to edit my post - I assumed pre-emptive multitasking as a given, but that assumption may not stand up to scrutiny, or at least may be disputed :-)

      – Richard Downer
      10 hours ago






    • 1





      @RichardDowner no, I agree with you. The question is about DOS. It is about running existing DOS software. Existing DOS software very often assumes exclusive ownership of hardware. To run multiple applications at once without each experiencing a whole bunch of surprises, you need to be able to intercede upon hardware accesses. That sort of intercession is called pre-empting. Saying "oh, but you could have offered this service, and then brand new software could have been written to use it" is completely orthogonal to the question posed.

      – Tommy
      10 hours ago






    • 2





      @FelixPalmen nobody is arguing that "preemption is a must for SMP". They're arguing that preemption is a must for finding a way to do anything useful with multiple cores within the confines of running MS-DOS software.

      – Tommy
      10 hours ago














    10












    10








    10







    If by IBM DOS you mean IBM PC DOS, which was a rebrand/derivate of MS-DOS, then the answer is no - DOS will only ever support a single core. HyperThreading and multiple cores is simply not supported by DOS.



    Making DOS use multiple cores would be a major operation. Firstly DOS would have to support multitasking. It could not be task switching or cooperative multitasking, it would have to be full pre-emptive multitasking. Then it would need to be developed to support SMP (symmetric multiprocessing). Both of these tasks would be major undertakings - their peers in the 90s took years to get reliable and efficient multiprocessing support.



    It is possible that a single application could take advantage of multiple cores if it used DOS as a bootstrap and then kicked DOS out, and essentially became its own operating system.



    If you're looking for a 90s-era operating system that supports multiple cores, then you would be looking at Linux (2.0 or later), Windows NT 4 or OS/2 Warp.






    share|improve this answer













    If by IBM DOS you mean IBM PC DOS, which was a rebrand/derivate of MS-DOS, then the answer is no - DOS will only ever support a single core. HyperThreading and multiple cores is simply not supported by DOS.



    Making DOS use multiple cores would be a major operation. Firstly DOS would have to support multitasking. It could not be task switching or cooperative multitasking, it would have to be full pre-emptive multitasking. Then it would need to be developed to support SMP (symmetric multiprocessing). Both of these tasks would be major undertakings - their peers in the 90s took years to get reliable and efficient multiprocessing support.



    It is possible that a single application could take advantage of multiple cores if it used DOS as a bootstrap and then kicked DOS out, and essentially became its own operating system.



    If you're looking for a 90s-era operating system that supports multiple cores, then you would be looking at Linux (2.0 or later), Windows NT 4 or OS/2 Warp.







    share|improve this answer












    share|improve this answer



    share|improve this answer










    answered 14 hours ago









    Richard DownerRichard Downer

    2,260634




    2,260634








    • 1





      AFAIK there were only two versions of OS/2 with support for multiple CPUs, OS/2 2.11 SMP and OS/2 Warp 4 AS SMP, and both are very rare.

      – Stephen Kitt
      14 hours ago











    • "It could not be task switching or cooperative multitasking, it would have to be full pre-emptive multitasking" <- why? shouldn't it be enough to give each "core" something to do and wait until this something says "hey, I finished"?

      – Felix Palmen
      13 hours ago






    • 1





      Please feel free to edit my post - I assumed pre-emptive multitasking as a given, but that assumption may not stand up to scrutiny, or at least may be disputed :-)

      – Richard Downer
      10 hours ago






    • 1





      @RichardDowner no, I agree with you. The question is about DOS. It is about running existing DOS software. Existing DOS software very often assumes exclusive ownership of hardware. To run multiple applications at once without each experiencing a whole bunch of surprises, you need to be able to intercede upon hardware accesses. That sort of intercession is called pre-empting. Saying "oh, but you could have offered this service, and then brand new software could have been written to use it" is completely orthogonal to the question posed.

      – Tommy
      10 hours ago






    • 2





      @FelixPalmen nobody is arguing that "preemption is a must for SMP". They're arguing that preemption is a must for finding a way to do anything useful with multiple cores within the confines of running MS-DOS software.

      – Tommy
      10 hours ago














    • 1





      AFAIK there were only two versions of OS/2 with support for multiple CPUs, OS/2 2.11 SMP and OS/2 Warp 4 AS SMP, and both are very rare.

      – Stephen Kitt
      14 hours ago











    • "It could not be task switching or cooperative multitasking, it would have to be full pre-emptive multitasking" <- why? shouldn't it be enough to give each "core" something to do and wait until this something says "hey, I finished"?

      – Felix Palmen
      13 hours ago






    • 1





      Please feel free to edit my post - I assumed pre-emptive multitasking as a given, but that assumption may not stand up to scrutiny, or at least may be disputed :-)

      – Richard Downer
      10 hours ago






    • 1





      @RichardDowner no, I agree with you. The question is about DOS. It is about running existing DOS software. Existing DOS software very often assumes exclusive ownership of hardware. To run multiple applications at once without each experiencing a whole bunch of surprises, you need to be able to intercede upon hardware accesses. That sort of intercession is called pre-empting. Saying "oh, but you could have offered this service, and then brand new software could have been written to use it" is completely orthogonal to the question posed.

      – Tommy
      10 hours ago






    • 2





      @FelixPalmen nobody is arguing that "preemption is a must for SMP". They're arguing that preemption is a must for finding a way to do anything useful with multiple cores within the confines of running MS-DOS software.

      – Tommy
      10 hours ago








    1




    1





    AFAIK there were only two versions of OS/2 with support for multiple CPUs, OS/2 2.11 SMP and OS/2 Warp 4 AS SMP, and both are very rare.

    – Stephen Kitt
    14 hours ago





    AFAIK there were only two versions of OS/2 with support for multiple CPUs, OS/2 2.11 SMP and OS/2 Warp 4 AS SMP, and both are very rare.

    – Stephen Kitt
    14 hours ago













    "It could not be task switching or cooperative multitasking, it would have to be full pre-emptive multitasking" <- why? shouldn't it be enough to give each "core" something to do and wait until this something says "hey, I finished"?

    – Felix Palmen
    13 hours ago





    "It could not be task switching or cooperative multitasking, it would have to be full pre-emptive multitasking" <- why? shouldn't it be enough to give each "core" something to do and wait until this something says "hey, I finished"?

    – Felix Palmen
    13 hours ago




    1




    1





    Please feel free to edit my post - I assumed pre-emptive multitasking as a given, but that assumption may not stand up to scrutiny, or at least may be disputed :-)

    – Richard Downer
    10 hours ago





    Please feel free to edit my post - I assumed pre-emptive multitasking as a given, but that assumption may not stand up to scrutiny, or at least may be disputed :-)

    – Richard Downer
    10 hours ago




• 1

  @RichardDowner no, I agree with you. The question is about DOS, i.e. about running existing DOS software, and existing DOS software very often assumes exclusive ownership of the hardware. To run multiple applications at once without each experiencing a whole bunch of surprises, you need to be able to intercede upon hardware accesses. That sort of intercession is called pre-empting. Saying "oh, but you could have offered this service, and then brand new software could have been written to use it" is completely orthogonal to the question posed.

  – Tommy
  10 hours ago




• 2

  @FelixPalmen nobody is arguing that "preemption is a must for SMP". They're arguing that preemption is a must for finding a way to do anything useful with multiple cores within the confines of running MS-DOS software.

  – Tommy
  10 hours ago











    1














The answers above are correct about core usage, but DOS would still run faster than on the old machines simply because the CPU clock (GHz today versus MHz then) is far higher. This is actually a problem with many old games: things happen faster than you can react.

If you want to test or play without committing to a wipe-and-reinstall, you could always try FreeDOS in an emulator or on an old hard drive.
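
One way such a game could have been made speed-independent is to pace itself against the BIOS tick counter rather than counting CPU loop iterations. A minimal sketch in C (Borland/Turbo C style; wait_ticks is a hypothetical helper I've named for illustration, and the counter's reset at midnight is ignored for brevity):

    #include <dos.h>

    /* The BIOS increments a 32-bit tick counter at 0040:006Ch roughly
       18.2 times per second, independent of CPU speed. Busy-wait until
       the requested number of ticks (~55 ms each) has elapsed. */
    void wait_ticks(unsigned long ticks)
    {
        volatile unsigned long far *bios_ticks =
            (volatile unsigned long far *) MK_FP(0x0040, 0x006C);
        unsigned long start = *bios_ticks;
        while (*bios_ticks - start < ticks)
            ;  /* busy-wait; ignores the rollover at midnight */
    }

A game that calls something like wait_ticks(1) once per frame runs at the same pace on a 4.77 MHz 8088 and on a modern multi-GHz core.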






answered 12 hours ago

– UnhandledExcepSean (new contributor)

• 4

  Actually, the canonical example of speed causing problems wasn't, IMHO, games, but rather Turbo Pascal. Certain versions of Turbo Pascal produced executables with a timing loop at startup, and on machines past a certain speed (I think I first saw this with the Pentium, but I'm not 100% sure) they would abort with a runtime error due to, I think, an overflow of an integer variable. Fortunately, someone came up with a patch (I'm sure I still have it here somewhere) that bypassed that code - I would run it as part of the compile process.

  – manassehkatz
  12 hours ago

• 1

  @manassehkatz The first time I experienced "too fast" was Tetris, which I originally had for a 286. When I installed it on a 386, I had to turn off the turbo button for it to be remotely playable. By my 486, it was unplayable :P

  – UnhandledExcepSean
  12 hours ago

• Games should, inherently, understand the speed issue and deal with it - even though many did not. Ordinary applications shouldn't have to. One of the complications is that IBM (or maybe it was Microsoft; I'm not sure whose idea it was) came up with 18.2 ticks per second when plenty of other systems already used 60 ticks per second. That is fine for typical application timing but far too slow for interactive games - so every game came with its own method of timing, and many of those methods failed past a certain CPU speed (or simply didn't adapt at all and assumed 4.77 MHz).

  – manassehkatz
  12 hours ago

• 2

  For the record, I had to use the Turbo Pascal patch program in order to run Turbo Pascal software on a Pentium 200 (non-MMX), if memory serves - but only in real DOS, not when running inside Windows 95. So the execution speed that causes its startup code to attempt a divide by zero must be somewhere just below that of an unencumbered P200.

  – Tommy
  11 hours ago

• 1

  @manassehkatz, 18.2 Hz is 1/65536th of 1/12th of the PC's 14.318 MHz crystal clock - four times the 3.58 MHz NTSC color subcarrier (and three times the CPU's 4.77 MHz clock). Fortunately, the timer could be adjusted to produce other frequencies.

  – Mark
  6 hours ago
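
To make that last point concrete: the PIT's input clock is 14.31818 MHz / 12 ≈ 1.193182 MHz, and the BIOS programs channel 0 with the maximum divisor of 65536, giving 1193182 / 65536 ≈ 18.2 Hz. A program wanting a faster tick can load a smaller divisor; a hedged sketch using Turbo C's outportb (note that the BIOS time-of-day clock will then run fast unless the program's interrupt handler compensates):

    #include <dos.h>

    /* Reprogram PIT channel 0 from the default ~18.2 Hz to a higher rate.
       The input clock is 1193182 Hz, so e.g. hz = 60 gives a divisor of
       1193182 / 60 ~= 19886, i.e. roughly 60 interrupts per second. */
    void set_timer_hz(unsigned int hz)
    {
        unsigned int divisor = (unsigned int)(1193182UL / hz);
        outportb(0x43, 0x36);                    /* channel 0, lo/hi byte, mode 3 */
        outportb(0x40, divisor & 0xFF);          /* divisor low byte */
        outportb(0x40, (divisor >> 8) & 0xFF);   /* divisor high byte */
    }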



















    0














DOS itself won't do anything to boot up the "extra" cores in a multicore system; it runs only on the boot CPU.

A program that does that is normally called an operating system. You could certainly have a program that takes over from DOS, maybe even one that saves the previous DOS state and can exit back to DOS, but nobody's written such a thing. You can probably find bootloaders that will load Linux from DOS, though. It would involve taking over the interrupt table and so on.

https://stackoverflow.com/questions/980999/what-does-multicore-assembly-language-look-like includes an answer with some details on what you'd need to do on x86: send inter-processor interrupts (IPIs) from the boot CPU to bring up the other cores.
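
A heavily hedged sketch of that wake-up handshake (the INIT-SIPI-SIPI sequence from Intel's MP initialization protocol; it assumes 32-bit protected mode with the local APIC identity-mapped at its default base, and delay_us is a hypothetical stub standing in for a calibrated timer):

    #include <stdint.h>

    /* The boot CPU writes to the local APIC's interrupt command register
       to wake the application processors (APs); the APs then start in
       real mode at physical address start_page * 0x1000. */
    #define APIC_ICR_LOW ((volatile uint32_t *) 0xFEE00300)

    static void delay_us(unsigned int us)
    {
        (void)us;  /* hypothetical stub: real code would spin on a timer */
    }

    void start_all_aps(uint8_t start_page)   /* e.g. 0x08 -> AP code at 0x8000 */
    {
        *APIC_ICR_LOW = 0x000C4500;               /* INIT IPI, all excluding self */
        delay_us(10000);                          /* ~10 ms settle time */
        *APIC_ICR_LOW = 0x000C4600 | start_page;  /* first startup IPI (SIPI) */
        delay_us(200);
        *APIC_ICR_LOW = 0x000C4600 | start_page;  /* second SIPI, per the spec */
    }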





    CPUs with fewer cores tend to have higher guaranteed clock speeds, because each core has a higher power budget. But CPUs that support "turbo" typically have a high single-core turbo clock they can use when other cores are idle.





Modern Intel CPUs with more cores also tend to have more L3 cache, and a single core can benefit from all the L3 on its die (though not from the L3 of other CPUs in a multi-socket system).



    If you're using a DOS extender that allows a DOS program to access more than 1MiB of RAM (e.g. running in 32-bit protected mode), then it might actually benefit from having more than the 3MiB of L3 cache that a low-end dual-core system has.
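
As an illustration of what an extender buys you, here is a minimal sketch assuming the DJGPP toolchain (one common 32-bit DOS-extender environment, via its <dpmi.h> wrappers); the 8 MiB figure is arbitrary:

    #include <dpmi.h>
    #include <stdio.h>

    /* Allocate 8 MiB of extended memory through DPMI, far beyond the
       640 KiB of conventional memory available to plain real-mode DOS. */
    int main(void)
    {
        __dpmi_meminfo block;
        block.size = 8UL * 1024 * 1024;
        if (__dpmi_allocate_memory(&block) == -1) {
            puts("DPMI allocation failed");
            return 1;
        }
        printf("Got 8 MiB at linear address 0x%08lx\n", block.address);
        __dpmi_free_memory(block.handle);
        return 0;
    }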



But otherwise DOS can't use more memory than even the smallest L3 caches on modern mainstream x86 CPUs hold. (And on Skylake-X the per-core L2 caches are 1 MiB, up from 256 KiB, so apart from uncacheable I/O / device / VGA memory, everything would be a cache hit with ~13 cycle latency - much faster than the ~45 cycle L3 latency of a dual or quad core!)



    But there's a downside to having more cores on Intel CPUs: L3 cache latency. They put a slice of L3 along with each core, and cores + L3 are connected by a ring bus. So more cores means more hops on the ring bus on average to get to the right slice. (This is reportedly even worse on Skylake-X, where a mesh connects cores. It's odd because a mesh should mean fewer hops.)



This extra latency also affects DRAM access, so single-core memory bandwidth is better on dual/quad-core desktop CPUs than on big many-core Xeons; see Why is Skylake so much better than Broadwell-E for single-threaded memory throughput? Even though the Xeon has quad-channel or six-channel memory controllers, a single core can't saturate them, and actually gets worse bandwidth than a single core of the same clock speed on a quad-core part. Bandwidth is limited by max_concurrency / latency.
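
As a rough worked example (the buffer count and latency here are assumptions for a ballpark, not measurements): with ~10 line-fill buffers per core tracking outstanding misses, 64-byte cache lines, and ~90 ns effective memory latency, a single core tops out around 10 × 64 B / 90 ns ≈ 7 GB/s, no matter how many DRAM channels sit behind it.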



    (Of course this doesn't apply to L2 cache hits, and 256kiB L2 is a good fraction of the total memory that DOS programs can use without a DOS extender. And 2, 4, or 8 MiB of L3 cache is pretty huge by DOS standards.)





AMD Ryzen is different: it uses "core clusters" (CCXs), in which each group of four cores shares an L3 cache. More total cores won't give you more L3 than a single core can benefit from, but within a cluster, L3 latency is fixed and pretty good.






answered 3 hours ago

– Peter Cordes





























