What was the first microprocessor to support full virtualization?
Virtual memory, which allows an operating system to run several machine-code programs isolated from each other, came to the desktop during the eighties. But full virtualization, which lets a hypervisor run several operating systems isolated from each other on the same machine, came, as I understand it, quite a bit later; the 386, for example, was for some reason not able to run multiple virtual 386s (short of software emulation, which carries a hefty performance penalty).
What was the first microprocessor that could virtualize itself to support hypervisors?
(The intent of specifying a microprocessor is to omit systems like the IBM 360, which usually turn out to have done these things a decade or two earlier. I would actually also be interested in which mainframes supported full virtualization, but it's not the focus of this question.)
history hardware microprocessor cpu virtual-memory
"The intent of specifying a microprocessor is to omit systems like the IBM 360, which usually turn out to have done these things a decade or two earlier." – You're off by 2-3 decades ;-) Virtualization was supported by the addition of the Dynamic Address Translation Unit in the S/360-67 in 1967.
– Jörg W Mittag
9 hours ago
@JörgWMittag I see nothing wrong with the statement in the question. The 80386 came out in 1985, 18 years after the IBM 360, which was not quite two decades earlier.
– Dan Neely
8 hours ago
asked 13 hours ago
– rwallace
2 Answers
Full, hardware-assisted virtualisation, with the intention of supporting hypervisors running operating systems without requiring para-virtualisation, was added to micro-processors relatively recently. (Many RISC-style architectures were virtualisable following Popek and Goldberg’s criteria, and were used in high-end partitionable systems, but with external support.)
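Popek and Goldberg's criterion can be stated very compactly: a machine is classically virtualizable if every *sensitive* instruction (one that reads or changes machine configuration) is also *privileged* (traps when executed outside supervisor mode), so a hypervisor can intercept and emulate it. The following toy sketch uses invented instruction names, not any real ISA; the x86 example with `POPF` reflects the well-known fact that `POPF` silently behaves differently in user mode instead of trapping, which is one of the reasons the 386 could not virtualize itself.

```python
# Toy illustration of the Popek-Goldberg (1974) criterion.
# Instruction names are hypothetical, not real opcodes.

SENSITIVE = {"LOAD_CR", "SET_MODE", "IO_OUT"}   # affect machine state/config
PRIVILEGED = {"LOAD_CR", "SET_MODE", "IO_OUT"}  # trap when run in user mode

def classically_virtualizable(sensitive, privileged):
    # Criterion: the sensitive instructions are a subset of the privileged ones
    return sensitive <= privileged

print(classically_virtualizable(SENSITIVE, PRIVILEGED))  # True

# The 80386's protected-mode problem: POPF alters the interrupt flag
# (sensitive) but does not trap in user mode (not privileged).
X86_SENSITIVE = SENSITIVE | {"POPF"}
print(classically_virtualizable(X86_SENSITIVE, PRIVILEGED))  # False
```

VT-x and AMD-V sidestep this by adding a new guest mode in which even the problematic instructions cause exits to the hypervisor.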
The main architectures which support full hardware-assisted virtualisation are the following:

- SPARC, starting with the sun4v architecture, first implemented in the UltraSPARC T1 in 2005;
- Power, starting with ISA 2.06 in 2009, implemented for example in POWER7 (earlier Power ISAs supported virtualisation, with a specific hypervisor mode, but 2.06 introduced significant virtualisation acceleration features);
- Itanium, starting with Poulson in 2012;
- x86, starting with Intel VT-x in 2005 and AMD-V in 2006, with additional features in later architectures such as VT-d and extended page tables in 2008, unrestricted guests in 2010, and VMCS shadowing in 2013.
So arguably the first micro-processor with full virtualisation was the UltraSPARC T1 in 2005; I'll ignore VT-x on its own since it wasn't all that useful in practice for real-world implementations.
An exact answer depends on the exact definition of full virtualisation; earlier CPUs (in particular, MIPS and PowerPCs) had good virtualisation support with the help of software emulation. In the early 2000s, as CPU speeds accelerated markedly, the performance cost of software virtualisation dropped, and virtualisation software became usable without hardware assistance in many scenarios. If we consider that virtualisation should cover the whole CPU, hardware-assisted virtualisation accelerators included, then I think the answer is Haswell in 2013 since that's the first architecture (as far as I'm aware) to support full hardware-assisted nested hypervisors.
– Stephen Kitt
answered 12 hours ago, edited 11 hours ago
Wasn't the early 2000s when CPU speeds stopped "accelerat[ing] markedly"? (cf. the famous paper "the free lunch is over" from 2005, recognizing that this had happened. "Arguably, the free lunch has already been over for a year or two, only we’re just now noticing.")
– Mason Wheeler
4 hours ago
What does full virtualization mean in this context?
I guess a more general approach may be helpful.
First off, as soon as virtualization goes beyond the (core) CPU, everything becomes machine- and implementation-specific - so it doesn't rely on the CPU alone. Further, even such virtualization usually needs a hypervisor, another OS, providing real-world services to the guests - and able to emulate next to all external resources. More often than not, such a 'guest OS' additionally provides hypervisor-specific drivers to avoid extreme performance impacts.
At heart it's about the 'picture' a 'bare metal' (*1) application gets of the machine it's running on, on one side, and what kind of hardware access the guest OS expects on the other.
Despite the danger noted in the question, I would like to cite the /360 as a simpler example to illustrate this. From the start, I/O was quite formalized as an interface to an I/O processor (*2). There was a limited set of instructions to start an I/O operation, check its status and cancel it if necessary, but no other way to communicate with the outside world. These high-level structures made it easy to virtualize I/O once memory and CPU were done (*3).
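The virtualisation-friendliness of that narrow interface can be sketched in a few lines. This is a loose model, not real /360 semantics beyond the well-known SIO/TIO/HIO mnemonics; device numbers and the channel program are invented. Because the guest can only start, test, or halt a channel operation, a hypervisor needs to intercept just these three entry points and redirect them to virtual devices:

```python
# Loose sketch of a virtualized channel interface. A hypervisor
# presents this object to the guest in place of a real channel.

class VirtualChannel:
    def __init__(self):
        self.busy = False
        self.log = []                     # what the virtual device received

    def sio(self, device, program):       # Start I/O
        self.busy = True
        self.log.append(("start", device, program))
        return "accepted"

    def tio(self):                        # Test I/O
        done = self.busy
        self.busy = False                 # pretend the operation completed
        return "complete" if done else "idle"

    def hio(self):                        # Halt I/O
        self.busy = False
        return "halted"

ch = VirtualChannel()
print(ch.sio(0x00E, ["WRITE 'HELLO'"]))   # accepted
print(ch.tio())                           # complete
```

The guest never touches device registers directly, so the hypervisor's emulation burden stays small - which is the point the paragraph above makes.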
For micros (and minis) the task was way more complex because of their low-level I/O: not a narrow interface, but a huge pile of separate instructions manipulating I/O addresses, all of which needed to be interpreted in a coherent way.
With all that said, among 8-bit microprocessors I'd give a head start to a 6809 in a system with a 6829 MMU (and maybe a 6828 PIC (Priority Interrupt Controller)) in 1978 (*4). Depending on hardware and OS structure this could have worked out fine.
In reality the issue is, as so often, not about technology, but about the need for solutions. With micros there was no use case. Where mainframes, during the 80s and 90s, focused quite a lot on machine virtualization to consolidate installations, micros simply spread as they were. 'Lesser' forms of virtualization provided everything needed to run parallel applications and services in a sufficiently separated way. Much as with mainframes before, adding more CPUs was more important than making virtual machines share a real one.
During the 90s micros took on more and more server roles, spreading through companies and 'infecting' every corner. This resulted in high pressure to consolidate: servers were moved to computing centres and migrated into rack-mount machines. While many of these applications needed their own custom environment, they used only a small portion of a machine's resources. At the same time, the early 2000s brought new, highly partitioned applications with services in previously unimaginable numbers for high-throughput web servers, search engines and the like. In combination this created a surge in demand for virtualization, a need CPU manufacturers satisfied with new models - first in server processors, where the use case originated, and in the long run for everyone.
The rest is history and I'd rather shut up and point to Stephen Kitt's detailed answer.
... Aaaargh ... I can't.
The 80386 did offer an 8086 virtual mode that enabled hosting multiple instances of its predecessor. The important part here is that it not only restricted access to 'unknown' registers and offered separate memory spaces, but also allowed trapping accesses to (marked) memory and to instructions like IN and OUT. As a result a 386 hypervisor could run multiple virtual 8086 instances - emulating not just the PC it was running on, but any 8086-based machine.
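The trap-and-forward idea behind that mode can be sketched as follows. This is a hedged toy model, not the real virtual-8086 mechanism: the monitor class, the port table, and the serial-port address are all invented for illustration. A guest's OUT to an I/O port traps into the monitor, which forwards the value to a virtual device instead of the real hardware:

```python
# Toy monitor in the spirit of virtual-8086 mode: trapped port
# writes are dispatched to per-port virtual device handlers.

class V86Monitor:
    def __init__(self):
        self.devices = {}                 # port number -> handler

    def attach(self, port, handler):
        self.devices[port] = handler

    def trap_out(self, port, value):
        # Invoked when the guest executes OUT; unattached ports are ignored
        handler = self.devices.get(port)
        if handler:
            handler(value)

mon = V86Monitor()
captured = []
mon.attach(0x3F8, captured.append)  # pretend 0x3F8 is a serial port
mon.trap_out(0x3F8, 0x41)           # guest writes 'A' to the port
print(bytes(captured).decode())     # A
```

Because every port access can be intercepted this way, the monitor decides which machine the guest appears to be running on.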
Something not uncommon in other CPU families, where new models not only offered (some) compatibility, but also full emulation - as it was called back then. Constructions like a /370 running a virtual /360 running a virtual 1401 were not unheard of.
*1 - Term chosen in lieu of a better one. It means an application that can act as if it were running on the same (or a similar) machine, as if there were no virtualization.
*2 - Much like Intel envisioned for the x86 family with the 8089 I/O processor.
*3 - Another great example of how beneficial clean abstraction layers are in the long run.
*4 - IIRC it may well have worked with a 6800+6829 already, but I'm unsure about the introduction date of the 6829. It was available by the time the 6809 was introduced.
2 Answers
2
active
oldest
votes
2 Answers
2
active
oldest
votes
active
oldest
votes
active
oldest
votes
Full, hardware-assisted virtualisation, with the intention of supporting hypervisors running operating systems without requiring para-virtualisation, was added to micro-processors relatively recently. (Many RISC-style architectures were virtualisable following Popek and Goldberg’s criteria, and were used in high-end partitionable systems, but with external support.)
The main architectures which support full hardware-assisted virtualisation are the following:
- SPARC, starting with the
sun4v
architecture, first implemented in the UltraSPARC T1 in 2005; - Power, starting with ISA 2.06 in 2009, implemented for example in POWER7 (earlier Power ISAs supported virtualisation, with a specific hypervisor mode, but 2.06 introduced significant virtualisation acceleration features);
- Itanium, starting with Poulson in 2012;
- x86, starting with Intel VT-x in 2005 and AMD-V in 2006, with additional features in later architectures such as VT-d and extended page tables in 2008, unrestricted guests in 2010, and VMCS shadowing in 2013.
So arguably the first micro-processor with full virtualisation was the UltraSPARC in 2005 — I’ll ignore VT-x on its own since it wasn’t all that useful in practice for real-world implementations.
An exact answer depends on the exact definition of full virtualisation; earlier CPUs (in particular, MIPS and PowerPCs) had good virtualisation support with the help of software emulation. In the early 2000s, as CPU speeds accelerated markedly, the performance cost of software virtualisation dropped, and virtualisation software became usable without hardware assistance in many scenarios. If we consider that virtualisation should cover the whole CPU, hardware-assisted virtualisation accelerators included, then I think the answer is Haswell in 2013 since that’s the first architecture (as far as I’m aware) to support full hardware-assisted nested hypervisors.
Wasn't the early 2000s when CPU speeds stopped "accelerat[ing] markedly"? (cf. the famous paper "the free lunch is over" from 2005, recognizing that this had happened. "Arguably, the free lunch has already been over for a year or two, only we’re just now noticing.")
– Mason Wheeler
4 hours ago
add a comment |
Full, hardware-assisted virtualisation, with the intention of supporting hypervisors running operating systems without requiring para-virtualisation, was added to micro-processors relatively recently. (Many RISC-style architectures were virtualisable following Popek and Goldberg’s criteria, and were used in high-end partitionable systems, but with external support.)
The main architectures which support full hardware-assisted virtualisation are the following:
- SPARC, starting with the
sun4v
architecture, first implemented in the UltraSPARC T1 in 2005; - Power, starting with ISA 2.06 in 2009, implemented for example in POWER7 (earlier Power ISAs supported virtualisation, with a specific hypervisor mode, but 2.06 introduced significant virtualisation acceleration features);
- Itanium, starting with Poulson in 2012;
- x86, starting with Intel VT-x in 2005 and AMD-V in 2006, with additional features in later architectures such as VT-d and extended page tables in 2008, unrestricted guests in 2010, and VMCS shadowing in 2013.
So arguably the first micro-processor with full virtualisation was the UltraSPARC in 2005 — I’ll ignore VT-x on its own since it wasn’t all that useful in practice for real-world implementations.
An exact answer depends on the exact definition of full virtualisation; earlier CPUs (in particular, MIPS and PowerPCs) had good virtualisation support with the help of software emulation. In the early 2000s, as CPU speeds accelerated markedly, the performance cost of software virtualisation dropped, and virtualisation software became usable without hardware assistance in many scenarios. If we consider that virtualisation should cover the whole CPU, hardware-assisted virtualisation accelerators included, then I think the answer is Haswell in 2013 since that’s the first architecture (as far as I’m aware) to support full hardware-assisted nested hypervisors.
Wasn't the early 2000s when CPU speeds stopped "accelerat[ing] markedly"? (cf. the famous paper "the free lunch is over" from 2005, recognizing that this had happened. "Arguably, the free lunch has already been over for a year or two, only we’re just now noticing.")
– Mason Wheeler
4 hours ago
add a comment |
Full, hardware-assisted virtualisation, with the intention of supporting hypervisors running operating systems without requiring para-virtualisation, was added to micro-processors relatively recently. (Many RISC-style architectures were virtualisable following Popek and Goldberg’s criteria, and were used in high-end partitionable systems, but with external support.)
The main architectures which support full hardware-assisted virtualisation are the following:
- SPARC, starting with the
sun4v
architecture, first implemented in the UltraSPARC T1 in 2005; - Power, starting with ISA 2.06 in 2009, implemented for example in POWER7 (earlier Power ISAs supported virtualisation, with a specific hypervisor mode, but 2.06 introduced significant virtualisation acceleration features);
- Itanium, starting with Poulson in 2012;
- x86, starting with Intel VT-x in 2005 and AMD-V in 2006, with additional features in later architectures such as VT-d and extended page tables in 2008, unrestricted guests in 2010, and VMCS shadowing in 2013.
So arguably the first micro-processor with full virtualisation was the UltraSPARC in 2005 — I’ll ignore VT-x on its own since it wasn’t all that useful in practice for real-world implementations.
An exact answer depends on the exact definition of full virtualisation; earlier CPUs (in particular, MIPS and PowerPCs) had good virtualisation support with the help of software emulation. In the early 2000s, as CPU speeds accelerated markedly, the performance cost of software virtualisation dropped, and virtualisation software became usable without hardware assistance in many scenarios. If we consider that virtualisation should cover the whole CPU, hardware-assisted virtualisation accelerators included, then I think the answer is Haswell in 2013 since that’s the first architecture (as far as I’m aware) to support full hardware-assisted nested hypervisors.
Full, hardware-assisted virtualisation, with the intention of supporting hypervisors running operating systems without requiring para-virtualisation, was added to micro-processors relatively recently. (Many RISC-style architectures were virtualisable following Popek and Goldberg’s criteria, and were used in high-end partitionable systems, but with external support.)
The main architectures which support full hardware-assisted virtualisation are the following:
- SPARC, starting with the
sun4v
architecture, first implemented in the UltraSPARC T1 in 2005; - Power, starting with ISA 2.06 in 2009, implemented for example in POWER7 (earlier Power ISAs supported virtualisation, with a specific hypervisor mode, but 2.06 introduced significant virtualisation acceleration features);
- Itanium, starting with Poulson in 2012;
- x86, starting with Intel VT-x in 2005 and AMD-V in 2006, with additional features in later architectures such as VT-d and extended page tables in 2008, unrestricted guests in 2010, and VMCS shadowing in 2013.
So arguably the first micro-processor with full virtualisation was the UltraSPARC in 2005 — I’ll ignore VT-x on its own since it wasn’t all that useful in practice for real-world implementations.
An exact answer depends on the exact definition of full virtualisation; earlier CPUs (in particular, MIPS and PowerPCs) had good virtualisation support with the help of software emulation. In the early 2000s, as CPU speeds accelerated markedly, the performance cost of software virtualisation dropped, and virtualisation software became usable without hardware assistance in many scenarios. If we consider that virtualisation should cover the whole CPU, hardware-assisted virtualisation accelerators included, then I think the answer is Haswell in 2013 since that’s the first architecture (as far as I’m aware) to support full hardware-assisted nested hypervisors.
edited 11 hours ago
answered 12 hours ago
Stephen KittStephen Kitt
37.3k8151164
37.3k8151164
Wasn't the early 2000s when CPU speeds stopped "accelerat[ing] markedly"? (cf. the famous paper "the free lunch is over" from 2005, recognizing that this had happened. "Arguably, the free lunch has already been over for a year or two, only we’re just now noticing.")
– Mason Wheeler
4 hours ago
add a comment |
Wasn't the early 2000s when CPU speeds stopped "accelerat[ing] markedly"? (cf. the famous paper "the free lunch is over" from 2005, recognizing that this had happened. "Arguably, the free lunch has already been over for a year or two, only we’re just now noticing.")
– Mason Wheeler
4 hours ago
Wasn't the early 2000s when CPU speeds stopped "accelerat[ing] markedly"? (cf. the famous paper "the free lunch is over" from 2005, recognizing that this had happened. "Arguably, the free lunch has already been over for a year or two, only we’re just now noticing.")
– Mason Wheeler
4 hours ago
Wasn't the early 2000s when CPU speeds stopped "accelerat[ing] markedly"? (cf. the famous paper "the free lunch is over" from 2005, recognizing that this had happened. "Arguably, the free lunch has already been over for a year or two, only we’re just now noticing.")
– Mason Wheeler
4 hours ago
add a comment |
What does full virtualization mean in this context?
I guess a more general approach may be helpful.
First off, as soon as virtualization leaves the topic of the (core) CPU, anything becomes machine and implementation specific - so it's not only relying on the CPU itself. Further, even such a virtualization does usually need an hypervisor, another OS, providing real world services to these - and being able to emulate next to all external resources. More often than not, such a 'Guest OS' does provide hypervisor specific drivers in addition, avoiding extreme performance impacts.
At the heart it's about the 'picture' a 'bare metal' (*1) application gets of the machine it's running on one side, and what kind of hardware access the guest OS does expect.
Despite facing the danger promised, I would like to cite the /360 as a more simple example to display this fact. Already from the start I/O was quite formalized as an interface to an IO processor (*2). There was a limited set on instructions to start an I/O operation, check for status and cancel them if necessary, but no other way to communicate with the outside. These high level structures made it easy to virtualize I/O, after memory and CPU were done (*3)
For micros (and minis) the task was a way more complex one as their low level I/O. Low level was not a huge pile of separate instructions manipulation I/O addresses that needed to be interpreted in a coherent way.
With all that said and for 8 bit microprocessors I'd give a 6809 in a system with a 6829 MMU (and maybe a 6828 PIC (Priority Interrupt Controller)) a head-start in 1978 (*4). Depending on hardware and OS structure this would work out fine.
In reality the issue is as so often not about technology, but the need for solutions. With micros there was no use case. Where mainframes focused, during the 80s and 90s, quite a lot on machine virtualization to consolidate installations, micros spread just as they were. 'Lesser' ways of virtualization did provide everything to satisfy the need for executing parallel applications and services in a sufficiently separated way. Much like with mainframes before, adding more CPUs was more important than to make virtual machines share a real one.
During the 90s micros had taken on more and more roles as servers, spread out in companies, 'infecting' every corner. This resulted in high pressure to consolidate. Servers where moved to computing centres and migrated into rack mount machines. While many of these applications did need their custom environment, they did use only small portions of a machine's resources. At the same time the early 2000s brought new highly partitioned applications with services in prior unimaginable numbers for high throughput web servers, search engines and alike. In combination this created a surge for virtualization, a need CPU manufacturer satisfied with new models. First with server processors, where the use case originated, in the long run for everyone.
The rest is history and I'd rather shut up and point to Stephen Kitt's detailed answer.
... Aaaargh ... I can't.
The 80386 did offer a 8086 VM mode that enabled hosting of multiple instances of its predecessor. The important part here is that it not only restricted access to 'unknown' register, and offered separate memory spaces but also allowed to trap access to (marked) memory and instructions like IN and OUT. As a result a 386 hypervisor could run multiple virtual 8086 instances, alas not emulating the PC running on, but any 8086 machine.
Something not uncommon in other families of CPUs, where new models not only offered (some) compatibility, but also full emulation - as it was called back then. Constructions like a /370 running a virtual /360 running a virtual 1401 where not unheard of.
*1 - Term chosen in lieu of any better. Meaning is that an application that can act as if running on the same (or similar) machine as if there was no virtualization.
*2 - Much like Intel envisioned for the x86 family with the 8089 I/O-Processor
*3 - Another great example how benefiting clean abstraction layers are in the long run.
*4 - IIRC it may as well have worked already with 6800+6829, but I'm unsure abut the introduction date of the 6829. It was available when the 6809 was introduced.
add a comment |
What does full virtualization mean in this context?
I guess a more general approach may be helpful.
First off, as soon as virtualization leaves the topic of the (core) CPU, anything becomes machine and implementation specific - so it's not only relying on the CPU itself. Further, even such a virtualization does usually need an hypervisor, another OS, providing real world services to these - and being able to emulate next to all external resources. More often than not, such a 'Guest OS' does provide hypervisor specific drivers in addition, avoiding extreme performance impacts.
At the heart it's about the 'picture' a 'bare metal' (*1) application gets of the machine it's running on one side, and what kind of hardware access the guest OS does expect.
Despite facing the danger promised, I would like to cite the /360 as a more simple example to display this fact. Already from the start I/O was quite formalized as an interface to an IO processor (*2). There was a limited set on instructions to start an I/O operation, check for status and cancel them if necessary, but no other way to communicate with the outside. These high level structures made it easy to virtualize I/O, after memory and CPU were done (*3)
For micros (and minis) the task was a way more complex one as their low level I/O. Low level was not a huge pile of separate instructions manipulation I/O addresses that needed to be interpreted in a coherent way.
What does full virtualization mean in this context?
I guess a more general approach may be helpful.
First off, as soon as virtualization goes beyond the (core) CPU, everything becomes machine- and implementation-specific - so it does not rest on the CPU alone. Further, such virtualization usually needs a hypervisor, another OS that provides real-world services to the guests and is able to emulate nearly all external resources. More often than not, a guest OS additionally provides hypervisor-specific drivers to avoid extreme performance penalties.
At heart it's about the 'picture' a 'bare metal' (*1) application gets of the machine it's running on, on one side, and what kind of hardware access the guest OS expects, on the other.
Despite the danger alluded to in the question, I would like to cite the /360 as a simpler example to illustrate this. From the start, its I/O was formalized as an interface to an I/O processor (*2). There was a limited set of instructions to start an I/O operation, check its status and cancel it if necessary, but no other way to communicate with the outside world. These high-level structures made it easy to virtualize I/O once memory and CPU were done (*3).
For micros (and minis) the task was way more complex because of their low-level I/O. Instead of a formalized interface there was a huge pile of separate instructions manipulating I/O addresses, all of which needed to be trapped and interpreted in a coherent way.
With all that said, for 8-bit microprocessors I'd give the 6809 in a system with a 6829 MMU (and maybe a 6828 PIC, Priority Interrupt Controller) a head start in 1978 (*4). Depending on hardware and OS structure, this could work out fine.
In reality the issue is, as so often, not about technology but about the need for solutions. With micros there was no use case. While mainframes, during the 80s and 90s, focused quite a lot on machine virtualization to consolidate installations, micros simply spread as they were. 'Lesser' forms of virtualization provided everything needed to run parallel applications and services in a sufficiently separated way. Much like with mainframes before, adding more CPUs was more important than making virtual machines share a real one.
During the 90s micros took on more and more roles as servers, spread out in companies, 'infecting' every corner. This resulted in high pressure to consolidate. Servers were moved to computing centres and migrated into rack-mount machines. While many of these applications needed their own custom environment, they used only small portions of a machine's resources. At the same time the early 2000s brought new, highly partitioned applications with services in previously unimaginable numbers for high-throughput web servers, search engines and the like. In combination this created a surge in demand for virtualization, a need CPU manufacturers satisfied with new models: first in server processors, where the use case originated, and in the long run for everyone.
The rest is history and I'd rather shut up and point to Stephen Kitt's detailed answer.
... Aaaargh ... I can't.
The 80386 did offer an 8086 VM mode that enabled hosting multiple instances of its predecessor. The important part here is that it not only restricted access to 'unknown' registers and offered separate memory spaces, but also allowed trapping accesses to (marked) memory and to instructions like IN and OUT. As a result a 386 hypervisor could run multiple virtual 8086 instances - and not just ones emulating the PC it was running on, but any 8086-based machine.
Something not uncommon in other CPU families, where new models not only offered (some) compatibility, but also full emulation - as it was called back then. Constructions like a /370 running a virtual /360 running a virtual 1401 were not unheard of.
*1 - Term chosen in lieu of a better one. It means an application that can act as if it were running on the same (or a similar) machine with no virtualization present.
*2 - Much like Intel envisioned for the x86 family with the 8089 I/O processor.
*3 - Another great example of how beneficial clean abstraction layers are in the long run.
*4 - IIRC it may well have worked with a 6800+6829 already, but I'm unsure about the introduction date of the 6829. It was available by the time the 6809 was introduced.
edited 7 hours ago by LangLangC
answered 8 hours ago by Raffzahn
3 – "The intent of specifying a microprocessor is to omit systems like the IBM 360, which usually turn out to have done these things a decade or two earlier." – You're off by 2-3 decades ;-) Virtualization was supported by the addition of the Dynamic Address Translation Unit in the S/360-67 in 1967. – Jörg W Mittag, 9 hours ago
3 – @JörgWMittag I see nothing wrong with the statement in the question. The 80386 came out in 1985, 18 years after the IBM 360, which was not quite two decades earlier. – Dan Neely, 8 hours ago