The 360 ran a PowerPC instruction set, while the original Xbox and the Xbox One use the x86 architecture. A game would have to be rewritten to run on the Xbox One.
You can convert instruction sets on the fly; the technology has been around for a while. One big commercial failure that partly relied on this was Transmeta and its Code Morphing Software: http://en.wikipedia.org/wiki/Code_Morphing_Software
That's the whole point of a VM right? OS X could run PowerPC apps with Rosetta when they went Intel. I just assumed that we could do the same since the Xbox One has a beastly processor compared to the aged 360 one.
PPC OS X apps didn't need any low-level system emulation; they could only talk to hardware via the kernel, and they could only talk to the kernel via system libraries. So at kernel boundaries you could simply translate the PPC system call to x86.
Whereas basically all consoles expose low-level system stuff to the game, such as memory-mapped IO, low-level GPU commands, TLBs, DMA, etc. None of this can be emulated without a large amount of overhead.
At least 6th and especially 7th generation consoles have such complex timing behavior that games can't depend on exact cycle counts, so cycle-accurate emulation of each component's clock generally isn't needed. And it's possible that the Xbox 360 and PS3 require games to go through their kernel for a lot of hardware access; I don't know.
Most VMs aren't full architecture emulation. Instead they use traps to intercept certain instructions, while the rest execute natively. Breaking out of this virtualized environment could potentially pose a security risk, and it used to be possible to detect that you were running in a virtualized host because timing wasn't as exact for the intercepted instructions as it is on actual hardware.
Intel and AMD added instructions that reduced the chance that information running on one VM could be leaked to another VM. This is the foundation of hypervisors, which are low-level systems designed specifically to give guest OSes a managed interface to the hardware, but importantly these are still virtualized VMs, meaning the guest sees the same computer architecture as the hardware. (A little hand-wavy, but correct enough for the sake of discussion.)
Transmeta designed cores that weren't strictly x86, for example, but the technology is more like RISC vs. CISC. By transforming the CISC instructions into equivalent RISC instructions on the fly, the underlying processor is RISC. This is already true of (almost?) every modern CPU: they execute microcode. Transmeta was one of the first to do this. I'm not sure, but Transmeta may have performed instruction reordering in their pipeline at the microcode level whereas others did this at the opcode level. I'm not aware of any instance where they used this to simultaneously provide multiple architectures on the same silicon, although at a glance it seems plausible. It would have been very expensive to build multiple ISAs into the same core, especially when the demand for such technology was nonexistent. By scrapping the transistors that would have been used to support multiple ISAs, you can use that space for better pipelines, SIMD, or multiple cores, or simply increase the yield, conserve power, and/or make the processor more efficient.
Any of those options would be better, so I don't believe any of these mythical multi-ISA processors exist.
The bottom line is that for the Xbox One to support Xbox 360 code, they would have to emulate everything and there simply aren't enough CPU cycles to make that happen.
Since I'm on a roll, the biggest disappointment was that the Xbox 360 didn't emulate the PlayStation. Now obviously the Xbox 360 is made by Microsoft and the PS by Sony, but the idea isn't so extreme. A company called Connectix [1] created a PS emulator for the Mac. The Mac used the same ISA as the PS, so the emulator only had to emulate the BIOS and peripherals. Sony took them to court and lost. The twist is that Microsoft later bought Connectix, and a part of that company lives on in the Virtual PC virtualization software now made by Microsoft. Sony apparently bought the PS emulator and killed it, but imagine if it had gone to Microsoft instead. The Xbox 360 uses the same ISA, so in theory it could also have run a 360 version of VGS. Gamers who didn't have a PS2 might have been able to play their PS games on Microsoft hardware. Microsoft would have gotten hardware sales and Sony would have received money for game licenses.
For this generation, Microsoft would have done well to acquire OnLive or build out its own server-side gaming system, as Sony did by purchasing Gaikai. This would have given the Xbox One the ability to play Xbox 360 games over a remote-desktop type of link. I think if the public backlash against the online offerings hadn't been so boisterous, we might have seen a service like that at launch instead of the watered-down version they scrambled to produce.
The key future-proofing component of the Xbox One is the ability to run parts of the game in the cloud. This is why the slower core of the Xbox One shouldn't be seen as limiting. Games can be written to push complex calculations to a server farm while the local core handles more pedestrian chores. Extending that idea further, we may see Xbox 360 emulation yet. The Xbox One is poised to win the battle this generation if these long-term strategies are given time to mature and be fully realized. The PS4 has some short-term appeal, but the gap between Microsoft and Sony isn't as wide as the gap between those two companies and Nintendo.
Nope, Connectix VGS did emulate the CPU: the PSX and PS2 both used MIPS CPUs, vs. PowerPC in the Mac. It very much was the exact same kind of thing as PCSX or its derivatives.
The only platform where a PSX emulator might not have needed full dynarec/interpretation would have been the PSP, and such a version is unlikely to actually exist for various reasons.
You mean like the POPS emulator that the PSP used to run PlayStation 1 games? (To correct a common mistake: the PSX was a completely different Japanese console, which only included the PlayStation as one of its parts.)
PSX was a codename for the PlayStation. They decided to reuse the name for their failed entertainment center, but the name predates it by almost a decade.
Right now nothing is using the cloud for game processing, and the only big game pushing features beyond save-game backup is Titanfall, which is getting dedicated servers for each match. Interesting discussion here about what Microsoft can do with cloud rendering.
That is the whole point of an emulator, which is a type of VM. Rosetta was an emulator, however products like VMware are not -- they rely on processor features to execute real code in a sandbox.
Binary recompilation is hard, but it's been done (the 360 did this for the original Xbox, which was x86).
The things that kill you are graphics and sound: particularly texture formats (which you don't have the CPU horsepower to convert) and audio (the 360 mixes a ton of voices in hardware, which is difficult to emulate in software).
Personally, I don't think it's impossible. But it'd take the right people a couple of years to make it actually work.
As I understand it, the Xbox One has a 360 API translation layer. There may be some nonportable assembly code around--there almost certainly is--but (speculative) most use of assembly I've seen in the real world tends to be developed alongside a "slow" C/C++ path which may not be so slow on the Xbox One.
It wouldn't be as simple as a pure recompilation, but it's a conceivable amount of work. It would probably be more work than it's worth for many titles, but I'm surprised at least the big games don't have support.
It's not very hard, actually. You can map the PPC instruction set to the x86 one with a bit of framework around it; more simply, you can write a C program that performs the same functionality as the original CPU. What this means is that you don't have to rewrite the game at all: you run the original compiled code, just not directly on the hardware.
I did this many years ago, when I first learned about the x86 instruction set, as a stepping stone towards understanding and making interpreters, virtual machines and compilers. I recommend it to anyone as a learning exercise.
This stuff is incredibly simple at its core, but because it's 'low level' there's a common misconception that it is somehow hard or complicated...
Once I knew it was just simple instructions, registers and a few flags coupled with a memory model, it was obvious how to achieve: you write C functions for the various flavours of ADD, SUB, LEA, MOV, FSTP, ADDPS etc. By iterating through the stream of bytes and interpreting them the same way the CPU would (this is always described in the CPU manual, in my experience), you call the right ones in the right sequence. You use an appropriate blob of memory for your registers, flags and other CPU state, and a big array of bytes for your emulated memory...
This is what an emulator is at the simplest level: an interpreter for CPU instructions. (Of course, implementing the instructions faithfully might necessitate more, e.g. emulating memory, the BIOS, or other hardware...)
I'd assume this makes it difficult.