Microsoft 3D Movie Maker Source Code (github.com/microsoft)
630 points by aaronbrethorst on May 4, 2022 | 214 comments


This is interesting, from the bottom of the readme file:

  Jez also offered this interesting BRender anecdote in an email:

  When Sam Littlewood designed BRender, he didn’t write the code. And then document it.  
  The way most things were built at the time.
  First, he wrote the manual.  The full documentation
  That served as the spec.  Then the coding started.


Some more detail on this:

The starting point for BRender was a sequence of software rendering experiments that grew from 'never doing it like our hand coded asm real mode x86 renderers ever again'. There were various diversions, eg: an achingly beautiful plane sweep that only rendered the visible parts of each output scanline, whilst murdering the cache so comprehensively that there are likely still lines waiting to be filled even now. Fortunately, to get the FX Fighter team going with something stable whilst debugging continued, I knocked up a Z buffer implementation, and was given a sharp lesson on caching when it blew all the previous attempts away (notably with consistency of performance).

Arriving at this point we figured there might be a product in it and looking at our own interaction with other libraries, it was clear that a manual was key to this. If I remember correctly, the API was a negotiation between myself proposing designs, and Crosbie Fitch documenting it - pushing back with ways to make things easier to explain.

It worked out very well, and we took it into later Argonaut hardware projects. The hardware guys had enough tools to do frame capture and replay from various games/apps at a low level, so were not desperate for a full software stack. The clients had very particular views about how APIs should look, depending on planned uses, in house style, compatibility etc. We would negotiate the API documentation back and forth - including lots of sample code.

This sample code was important, (and BRender would have benefited from more of this). I took lots of real use cases, then wrote proper code to implement them against the proposed APIs and included them in the docs as tutorial and example code. Importantly - they were a representative sampling of the anticipated uses (not just the easy ones), they were not 'handwavy' and included all appropriate resource management and error handling, and they had to read well on the page. As the API negotiation continued, so the examples and tutorials got updated.
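For what it's worth, here is a toy sketch of the kind of example code being described (all names invented; this is not BRender's actual API): a representative use case with real error handling and resource cleanup, rather than a handwavy fragment.

```python
# Toy sketch in the spirit described above (all names invented; this is
# not BRender's API). The point: tutorial code covering a realistic use
# case, with error handling and resource cleanup included.

class RendererError(Exception):
    """Raised when the imaginary renderer can't complete a request."""

class Renderer:
    """Stand-in for a proposed API surface under negotiation."""
    def __init__(self):
        self._open = True
        self.frames = 0

    def load_model(self, path):
        if not path.endswith(".mdl"):
            raise RendererError("unsupported model format: " + path)
        return {"path": path}

    def draw(self, model):
        if not self._open:
            raise RendererError("renderer already shut down")
        self.frames += 1

    def close(self):
        self._open = False

def render_scene(paths):
    """Tutorial code: load every model, draw each once, always clean up."""
    r = Renderer()
    try:
        models = [r.load_model(p) for p in paths]
        for m in models:
            r.draw(m)
        return r.frames
    finally:
        r.close()  # resource management shown even in example code
```

The "not handwavy" part is the try/finally and the error path - exactly the things tutorial snippets usually omit.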

This process also had the benefit that we only started investing in client specific software development once the project had got enough momentum (typically committing hardware NRE).


This is what I love about HN. Somebody quotes an almost 30 year old readme file, and the person mentioned in the readme chips in to tell the full story. Thank you Sam!


I really wish software was written this way these days.

These days I am always being rushed through sprints barely knowing what is going on.

What you described sounds like a wonderful way to actually engineer a great API.


It is! Although what I missed out from above is that whilst the API and its documentation is the deliverable - we would, to the best of our ability, try and minimise risk whilst developing it. Eg: by doing small performance experiments, drawing on prior experience, thinking about the evolution of resources within the API whilst writing samples, and looking at how other APIs handle similar things (but be careful - they may have had to perform heroics to achieve their behaviour).

Whilst the official progress is top down, the duck's feet are working hard feeling along the bottom to check for obstacles.


I feel like sprints should be more relaxed: "Oh you might bleed over into the next week? Don't stress, that's fine. Can we help you get unblocked and how?"

Just because you say you want it in 2 weeks does not make it so. Don't over-promise, or you will be severely disappointed. Software breaks; it does not care for deadlines.


This is how my team functions. It's such a refreshing work experience. Estimating work is hard; we shouldn't be punished or feel bad when we're off a bit.


That's kind of the way I write APIs, albeit at a smaller scale. First write the code of how I want to use the API, then write the actual API. I remember a talk from the developers of the Python requests library mentioning this same process.
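A minimal sketch of that process (names invented; not the requests library's actual design): write the desired call site first, then shape the API to fit it.

```python
# "Client code first" sketch (names invented). Step 1 was writing the
# call we *wished* existed:
#
#     page = fetch("https://example.com", timeout=5)
#     print(page.status, page.text)
#
# Step 2: implement the smallest API that makes that call site work.

from dataclasses import dataclass

@dataclass
class Response:
    status: int
    text: str

def fetch(url, timeout=10):
    """Toy implementation whose signature was dictated by the call site."""
    if not url.startswith(("http://", "https://")):
        raise ValueError("not a URL: " + url)
    # A real version would do network I/O; here the shape is the point.
    return Response(status=200, text="stub body for " + url)
```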


Yes, I do this too... it's a different thing when you are talking about your own insular project or something at a smaller scale.

The thing I was talking about was larger scale projects like the ones mentioned by the author (of the sort I work on at my day job) where you have teams of stakeholders and outside influence combined with a desire to release rapidly in an iterative form.

Software these days is just done differently in those contexts...we can't slow down enough to do what was detailed above...I just wish the world was sorted in a way or slowed down enough to do REAL engineering.


I kinda work this way on my personal projects: first I come up with a rough idea of the UX, and only then do I start implementing it. The desired UX ends up driving the decisions about the implementation details and technicalities — not the other way around as it often happens in much of modern software. And I, of course, disregard the time component.


I seem to remember BRender being one of the highest-performance software renderers of the day. I used to talk to Jez on Usenet and he offered me an interview at Argonaut (I was 16 I think) after I showed him my hand-coded x86 renderer. I don't think I was doing any real mode by that time, though. I can't even remember what effect running in real mode would have outside of the memory limitations? You can still access the full 32bit registers, right? (been 25 years since I coded x86).


There were three similarish software renderers around at that time, BRender, Canon Renderware (Adam Billyard), and Rendermorphics RealityLab (Servan Keondjian). There were also some renderers out of the demo scene that were definitely faster (but often with perf. related limitations).

The code was mostly C, with some protected mode x86 for inner loops. Prior to that point, all our PC games had been real mode x86 - and BRender was a result of moving away from that.


Yeah, I was coming from the demo scene side, although my code ended up in an Eidos game.

I dug into the released BRender source until I found the inner loops, then I "Nope!"'d all the way out and remembered why I never went back to assembler.

Thanks for the memories. I'm glad the code survived all these years until it could be released.


"hand coded asm real mode x86 renderers"

a more elegant weapon, from a more civilized age, may the forth be with you ;)


> ...whilst murdering the cache so comprehensively that there are likely still lines waiting to be filled even now.

Very evocative writing, I love it.


Is this really '4 days ago' as of 8-May-2022?

If it is, "Hello Sam!"


Anyway, from my recollection, it was a case of me wanting to produce a C++ API/wrapper around the C API, and realising that the extant documentation wasn't quite sufficient for me to do so (to fully grok what went on underneath the C API enough to produce a C++ wrapper). Therefore, producing a more thorough technical reference manual seemed to me to kill 2 birds: I'd obtain a sufficient understanding, and so would the customers. Thus although Jez was right that the manual came first, the process of producing a more thorough TRM gave rise to Sam's observations.


Long time! Maybe we should wait until Dan's back over, get the old gang together, and see if we can finally agree on a restaurant.


Well, if the restaurant is in France, that's a possibility. ;-)

I still remember winning the 'Bog Roll d'Or' for being last man standing in the Phall eating competition.

Oh, and the Yaohan Plaza (aka Oriental City, Hendon).


This sounds a lot like Domain-Driven Design. Model first. Test with code. Validate with scenarios. Repeat.


That reminds me of Amazon's method "working backwards": https://www.productplan.com/glossary/working-backward-amazon...

You start e.g. with a press release and then make your way backwards to the user stories and backlog.

It's a very useful approach to think big and focus on the problem instead of the many little issues that need to be solved.

The problem is that programming is still knowledge work. That means you cannot specify everything beforehand without doing the actual work. (Like writing a novel, which cannot be specified in advance.) The solution would imho be a good balance between describing the big picture/desired result (good) and spec'ing out every little detail and screen in advance (bad).


I remember seeing a programming tutorial that used this approach as well fairly early in my software engineering education. I want to say it was part of the SICP MIT course but I could be confusing it with something else.

It was pitched as something like "wishful thinking programming" IIRC. First you write the highest-level "business logic" code that you want to write with made-up constructs that have no implementation, then only once you've established that it feels good to write code using those constructs do you implement them.

I remember this also being the same approach that was used in a CPU-building course I took (NAND 2 Tetris): start with the instruction set, then implement it.


Wishful thinking is mentioned several times in SICP, both in terms of to be implemented functions and in separation of concerns.

Oftentimes in the book, they will write out a function with reliance on a variety of other functions that haven't been written yet, but which show a blissfully declarative outline of exactly what the function does. Then you go and write the sub-functions.

Looking at the code after is quite nice, but it takes a bit to wrap your head around writing large swaths of code that can't run -- especially when you're used to writing REPL-driven code and consistently checking/"testing" it
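A tiny sketch of that style (invented example; not SICP's own code): the top-level function is written first against helpers that don't exist yet, then the helpers are filled in.

```python
# "Wishful thinking" sketch (invented example): the top-level function is
# written first, calling helpers that didn't exist yet, so the outline
# reads declaratively.

def balance_report(transactions):
    # Written before is_deposit() and total() existed.
    deposits = total(t for t in transactions if is_deposit(t))
    withdrawals = total(t for t in transactions if not is_deposit(t))
    return deposits - withdrawals

# Helpers filled in afterwards, once the outline above felt right.
def is_deposit(t):
    return t >= 0

def total(ts):
    return sum(abs(t) for t in ts)
```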


This is an actual advantage of test-driven design; unit tests are bad for their stated purpose (they aren't good at testing things and are expensive to maintain), but because they make you write clients for your APIs before implementing them, you actually have some hope of knowing they're useful.

I think you should consider writing unit tests before coding, but then delete them and write regression tests afterwards. Though I haven't tried this approach yet.
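A small sketch of the test-first half of that (invented example): the test is written against a function that doesn't exist yet, forcing its interface to be designed from the caller's side.

```python
# Test-first sketch (invented example): the test below was written before
# slugify() existed, forcing the signature and behaviour to be chosen
# from the caller's point of view.

import re

def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  Multiple   Spaces ") == "multiple-spaces"

# Implementation written afterwards, shaped entirely by the test above.
def slugify(text):
    text = text.strip().lower()
    text = re.sub(r"[^a-z0-9]+", "-", text)
    return text.strip("-")

test_slugify()
```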


I've done some hobby projects this way, of course I've allowed myself to go back and adjust the documentation where the implementation would be easier or simpler from only slight alterations of the workflow.

It's been very pleasant, and allowed me to make good sense of what I was doing and where I was going.


Right now, at work, I am going insane with a task of reimplementing some web forms that talk to a shinier API. No documentation: "just read the previous Ruby server-side validation code and put it on the JS front-end". So I am hunting for hints and people to be sure the code does what it's intended to do. And I get cup-sized eyes when asking for documentation :D.


That's the _worst_ kind of specification: "It has to be like the old one, but with these differences".

That means you have to study the old code, which is of course poorly documented and probably buggy; and then try to reproduce it in a sane way.


Alternative idea: keep running the same Ruby code.

- https://github.com/ruby/ruby.wasm (compile Ruby to WASM)

- https://mame.github.io/emruby/ (example of Ruby compiled to WASM)

- https://opalrb.com/ (compile a subset of Ruby to JS)

This requires downloading several MB of code (and then caching it in-browser) so would be ideal for internal-only stuff.


Pretty cool. (I changed the languages for privacy reasons but your solution still stands)


Ah, well even if it's some super-obscure or internal thing, if it compiles down to C that doesn't mind POSIX you might still be able to Emscripten-ify it!

(Part of my thinking with this solution is to broadcast "you are asking me to do something that requires a solution this complex to wrangle the problem space effectively" to try to fend off similar inanity in the future, admittedly, perhaps I'm being a bit optimistic/unrealistic heh.)


You are lucky, I am reimplementing CRM functionality on a web api by looking at SQL Server Profiler!


Why wouldn't you reach for a decompiler first?

I guess perhaps you are working with some very obfuscated code, still if you are black box profiling code it might make sense to crack open that box as much as possible. Unless what you are working with is DRM-level "performs like cow dung in a blender" horrifically inefficient, I can't imagine it is obfuscated that badly.

At any rate, hopefully you are paid well for it, reverse engineering is difficult, to say the least.


They have the source code but they need someone to sign off, etc, etc (I am an external contractor) and meanwhile I started by doing what I could and now am nearly finished. Decompiling the old app would be hard because it is a VB application.


Holey moley, that's just plain cruel.


Haha, wtf :D.


We had an application running a business for 10 years. It started printing labels incorrectly. There was a bug in the code. The source code provided for the application was at least 5 years out of date. Documentation was null. Additionally, the variable names were TextBox1, TextBox2....etc.

I decompiled the app, documented the UI and which variable is what. Rewrote the whole app. It was quite painful but good learning experience. I would love to do another one.


Yeah I'm doing the same thing; I get by well enough with trying to read the old code and playing around with the old user interface, but there's hundreds of fields and thousands of features that I have to go through one at a time. Most validation is just a regular expression though, easy enough to port.


The concept of "Readme Driven Development" [1] seems similar, but smaller in scope. I've actually found it very pragmatic/productive, especially for smaller libraries, to start with writing the Readme.

1: https://tom.preston-werner.com/2010/08/23/readme-driven-deve...


This is an amazing, amazing skill. Do any university classes teach programmers how to think like this? Can it be taught?


It sounds a bit like the Waterfall model. It became highly unfashionable in the 90s, maybe even earlier.


It's really more of a proto-agile approach! I've never done the manual first personally, though I had a boss that had been on a project run that way early in his career who always wanted to do things that way again. The idea is to skip distilling requirements into specifications, engineering documentation, and user documentation. Instead you gather requirements AS user documentation, and only work backwards from there as necessary. Sort of like skipping all the specifications to V&V activities in a traditional systems engineering approach by writing tests first in test-driven-development.


My comparison to waterfall was a reductionistic take on the specifications up-front before coding starts aspect. If comparing it to the full waterfall process then it is definitely a fresh take!


This is pretty much how we were taught in college 15 years ago.

Actually implementing something is left to the final 30% of a project. One must first bore themselves to death with design and documentation before a single line of code is written.


And then discover that the entire foundation of the design can't work because of some critical logical flaw you didn't think about...


Something I missed above is that either prior, or concurrently, there will be performance experiments to check that things are staying sane, but these do not contribute to the later software development (except as a reference) and are likely not consistent with each other. The deliverable at this point is only documentation - getting it as good as possible without any concern for impact on existing implementation. The goal is to get as far up the slippery pole as possible - inevitably things will slide back over future version iterations.


That describes most software design and development to a T. The reality is you'll hardly ever get the design right the first time around.


Yep - It was a pretty flawed methodology all told.


It's only flawed for beginners. With 10 years or more of experience, it's possible to design successfully based on that experience.


It is flawed for (otherwise) veterans too if they are trying to do something that they haven't done before, so do not know what they do not know.

This really only works if you have done the same or something very very similar before, and so you have practically no unknowns. Notice that the developer who did that commented above that they had already done similar work in the past.

Also related this quote about how Joe Armstrong (of Erlang fame) approached problems (from [0]):

> Joe wrote amazingly simple programs and he did so in a peculiar way. First he wrote down the program any old way just to get it out of his head. Then once it worked he would then immediately create a new directory program2 and write it again. He would repeat this process five or six times (program5, program6, ...) and each time he would understand the problem a little better and sense which parts of the program were essential enough to re-type. He thought this was the most natural thing in the world: of course you throw away the first few implementations, you didn't understand the problem when you wrote those!

[0] https://github.com/lukego/blog/issues/32


Sure if you're writing a similar program to one you've already written. But then it's just agile on a longer time scale!


I mean, it's not really an amazing skill in my opinion. What's amazing about writing a high level explanation of how your program works, maybe with a few simple diagrams to show the reasoning behind it, and then writing a more detailed specification of what the program should do? It's basically a more comprehensive version of first writing the flow chart of your program in pseudocode with attached comments and then implementing it.


It's easy to do it, but I have never really heard of anyone doing it successfully like this. Most programmers come up with a basic plan and then just start coding it, and things evolve from there. As the saying goes, "a battle plan never survives contact with the enemy."

I'm incredibly impressed with programmers that can visualize a large program and the end product actually resembles what they originally envisioned. That is a gift.


Can that really happen with Agile, which means requirements can change or new requirements can come in at any time?


You'd still start off with requirements - documentation - code, it's just on the understanding that any of these can change.


Sure, why not. Just change the documentation first.


By some aspect this is how you write an API.


I've managed to get it to build about halfway:

1. In linux

2. Install wine 7.5 (other versions may work, this is what I used)

3. Create a 32-bit wineprefix on a *case-insensitive* ext4 drive (https://www.collabora.com/news-and-blog/blog/2020/08/27/usin...)

4. Download MSVC 2.0 (https://winworldpc.com/product/visual-c/2x), extract and install it into the 32-bit wineprefix - Make sure you use `wine32 winecfg` and set the OS to Windows 95

5. Copy the files to C:\3d inside the wineprefix

6. Enter command prompt from wine (`wine32 cmd`)

7. `set include=C:\MSVC20\INC`

8. `.\setvars.bat`

9. `nmake`

Unfortunately wine segfaults for me when compiling one of the BRender files :'(

EDIT: Now I have Bren and Engine building! Studio does not build, reports that it doesn't know how to generate 'C:\3d\kauai\obj\wins\utilglob.obj'. Probably something simple...


Wine dev, here. Feel free to throw me a wine log of that crash over email, I could maybe see if it's something obvious.


>case insensitive.

Just create a disk image and mkfs.ext4 it :)

   mkdir -p ~/wine/prefix

   truncate -s 10G ~/wine/wine.img

   mkfs.ext4 -O casefold ~/wine/wine.img

   # after mounting it loopback, enable folding per directory: chattr +F <dir>


For the record, you don't need the case folding filesystem at all. Wine has been doing its own case insensitive lookups and comparisons for decades with the only problem being performance which is certainly not going to be a concern here.


That is such an incredibly good idea haha Thanks :)


i would use a real Win95,98,XP in virtualization, there could be many micro flaws using Wine with this old stuff :)


True! Might give this a go tomorrow. I'm kinda shocked it got as far as it did to be honest


How about using Proton? It should be a little more complete for some cases.


he should just use the original system - porting to a newer system always happens AFTER getting it to build on the original platform


im also shocked :)


maybe a step by step upgrade from 2.x->4.x->5.x->6.x is needed

there are subtle changes between the very early studio versions but when reaching VS2005/2008 everything is fine for upgrade


builds for me under fresh XP with VC++ 2.0 until some globutil.obj file could not be built by nmake


Ben Stone is also working on it - maybe join forces :)

https://github.com/benstone/Microsoft-3D-Movie-Maker

there are a few that started to port: https://github.com/microsoft/Microsoft-3D-Movie-Maker/networ...


Exactly the same issue I ended with. It might need some modifications. I'll have a dig



This is awesome on so many levels. The bottom part of the readme says Jez San of Argonaut agreed to MIT-license the BRender part of it.

Argonaut rang a bell, and sure enough - Jez developed Starglider, one of my all time favourite games on the Amiga.

I can still remember it today, nearly four decades later: it was the first time I visited a new friend, who was the first in our neighbourhood to get the Amiga. He fired up Starglider - and I was simply blown away.

My C64 didn't cut it anymore - I just had to get the Amiga. An involvement in the Norwegian demo scene followed. This later had a big impact on my career as a SWE; thanks very much for this, Jez, if you're here!


Oh yes, Starglider was stunning at the time.

But wireframe only, I think?

I think Starglider 2 was inspired in part by Zarch on the Archimedes, and the free Lander demo that came with every Archie.

https://www.youtube.com/watch?v=mFwpsb75omg

It is hard to put into words how amazing it was to see this running, live and playable, on a home computer in 1987.

Jez San also co-designed the SuperFX 3D accelerator chip in the SNES game Starfox. That evolved into the ARC RISC CPU family.

An ARC core was used for the Intel Management Engine in Intel CPUs before they switched to an x86 ME controller running Minix 3.

https://en.wikipedia.org/wiki/Intel_Management_Engine#Hardwa...

And that ARC core ran ThreadX, which is the same OS that runs on the GPU of every Raspberry Pi.

https://en.wikipedia.org/wiki/ThreadX

San offered me a job once, nearly 30Y ago. I should have accepted it. :-D


interesting anecdotes about argonaut design methodology... pt 2

we used the opposite approach to design the Super FX chip (we called it the Mario chip - likely the first ever gpu)

Carl & Pete would write code to do 3d math and render polygons... and write and re-write the code hundreds of times, using an imaginary assembler language. they would invent new instructions on the fly to let their code get tighter. they would keep coming up with new instructions, new registers, new ways of forming instructions and referencing registers etc... until the code was the tightest... and THEN, we designed a microprocessor from scratch to execute those instructions.

Also, likely the opposite way that microprocessors were designed... because (in the past) the designers had no idea what their cpus were going to be used for, whereas we knew what we wanted to achieve, so got to use software to drive the hardware design.

this was the first time that a cpu was able to be designed using programmable gate arrays. literally Ben & Rob could make a change to the hardware design - the netlist, and email it off to the usa, and actel would Fedex back the next day a working chip for us to try out. this was before you could program your own fpga's yourself (and Xilinx at the time wasn't large enough to fit a whole microprocessor). this iterative approach allowed us to design the optimum 'gpu' for the task.


Why did Argonaut have the ability to design early GPU hardware when it doesn't seem like anyone else had the interest, background knowledge or tools?

(And how did Nintendo find a company in the UK to work with considering the language barriers?)


Argonaut had several team members including Rick Clucas (CTO) who had hardware design experience as well as software. Rick's first hardware at Argonaut was a development cartridge for the SNES that allowed you to remote control it and download games from the pc development kit. This allowed us to be self sufficient and develop games earlier on SNES (and less expensively than others)

And Fuzzz (aka James Hakewill, now at Tesla AI) was designing silicon including The ARC.

its a pretty well known story as to how we got involved with Nintendo.

I went up to a senior Nintendo guy at a CES show and showed him some stuff that blew him away. I was on a flight to meet Mr Yamauchi in Japan a few days later.

We had Argonaut team members stationed inside Nintendo working directly for Mr Miyamoto, et al

our team back in UK did the tech stuff, and the team inside Nintendo built the games.

It was only the second time that Nintendo had worked with a company outside Japan (the first was Rare ltd)


From what has been said of the latter: Argonaut showed up to Nintendo's booth (E3, CES etc., I forget which one specifically).

They pulled out a Nintendo Game Boy and proceeded to put an unlicensed prototype game cartridge in it, which booted successfully even without Nintendo's approval.

Nintendo was so impressed with the engineering talent they agreed to work with them thereafter


That is amazing. What was the design process like in terms of tooling? Schematic capture or was one of the proto HDLs in use?


I don't remember which tools the Super FX chip was designed with but maybe some other ex-Argonauts can chime in on that. I know it was pretty basic gate level stuff and wasn't designed with a high level language (Ben Cheese was an old skool chip designer)... we started using VHDL later on, when we designed the ARC etc.


@Dang, really strange to see the dead comments (sibling to this, and elsewhere) as they are 100% relevant and straight from the source.

If I missed something then apologies.


Yeah, I have no idea why all of JezSan's comments are gray.


Looks like it's been fixed! Kudos.


Question: where did you learn 3D rendering in the 1980’s? How was this knowledge disseminated? How much was “solved” at the time?


I learned 3d math and rendering from the textbooks of the time, like the Newman & Sproull and Bruce Artwick books, and a lot of trial and error.

wrote my first 3d games on the bbc computer (1982-1983) and then got a Mac in 1984.. and did more 3d on the Mac. I started StarGlider on the Mac, written in 68000 code.. and then ported it to the Amiga and Atari ST when they came out. I had the first Amiga outside the USA.

https://www.amazon.co.uk/Principles-Interactive-Computer-Gra...

https://www.amazon.com/Applied-Concepts-Microcomputer-Graphi...


am sure you did fine without that job ;-)

btw, the arc core got used in lots of things that you never hear about... like in Sandisk memory cards (no relation), and various set top box chips (mpeg etc), and wifi chips (at the time, most intel wifi chips used arc)


guess I am here, now ;-)


It feels a bit surreal, small world and all that - so cheers! Many thanks again for your efforts, and pioneering work.

Even my (younger) brother remembered Starglider well; he's also gone on to do well for himself with a leading role in IT administration in the health sector.


Jez launched a crypto startup, FunFair, btw. Unfortunately, it failed.

https://funfair.io/


failed is a strong word. we've pivoted to something else. not every idea works out the way you want.

we thought that if we built a trustless casino technology that used smart contracts to power the games and generate provably fair random numbers so that no customer would ever have to play an unfair game.. and that no funds had to be held in the custody of the casino... that there'd be a market for that. we wuz wrong. the market doesn't seem to care about something that's non-custodial or guaranteed fair. we shuttered it about 18 months ago.

so we've pivoted. we're experimenting with NFTs, and driving games. And investing in other worthy projects.



Blown away that foone managed to just ask for this... and get it... within a month timeframe. Gives me some hope we can get more ancient software open sourced in the future.


I think Foone has been asking about this for 15+ years at this point.

I'd love to see classic Notepad and MS Paint too.


I think that Notepad and MS Paint were part of the Windows 2000 or NT leaks.

But the most important part of notepad was the editing component, which was a standard win32 component, and wasn’t part of the leak. It’s also unlikely they’ll release it since it’s part of current windows.


The full XP (SP0?) source code leaked and is complete enough that you can almost build and install it as a working OS. It's still missing components (perhaps most importantly, winlogon) but there's a good chance the edit control is in there.

You can find torrents of the source code online and dig around in it if you don't mind breaking the law (or ruining your chances of being allowed to contribute to Wine or ReactOS in the future).


The edit control was part of the leaks. It's in editec.c and a few nearby files. Don't ask me how I know.


Thanks for the correction.


We do try.


Is there any chance that older versions of Windows will have their source code officially released?

They're already out there as source code leaks, but it would be nice to be able to view it legally and perhaps more completely, in a similar way to how we can view the .NET reference source.

I really enjoyed browsing the NT4 and 2000 leaks, it was fascinating to see how Windows works under the hood, and I was interested to see how well (or not) my understanding of the disassembly matched up with where it came from. It also helped massively with troubleshooting and understanding API surfaces, particularly the kernel side.

Could this ever happen, is there the political will internally to be even more open on the OS side?


I'm not from Microsoft, but I think they're probably working on stuff like that, it's just a monumental task at the start, since they've probably licensed many components from companies that probably aren't around anymore... so you'd have to chase current license holders, which can be super tricky.

Besides the fact that they have to scan the code, remove offensive bits, figure out if some code is still in use and it might have security vulnerabilities or could disclose things they don't want disclosed, etc.

I guess they're slowly open sourcing kind of stand alone components, to bank on nostalgia.

I wouldn't be surprised if 20 years from now all the Windows plumbing is open source (it's not a money maker for them, anyway), à la Darwin and MacOS, and they just keep the management layer proprietary: settings, shell, UI toolkit, etc.


Visual Basic 6.0? (I mostly kid, I know this has been enough of an ask for enough years if it was similarly trivial it would've probably happened by now.)

Thanks for your work on open sourcing awesome old Microsoft stuff!


Someone's attempting a recreation: "RAD Basic: 100% compatible with VB6"[1].

[1] https://www.radbasic.dev/ https://news.ycombinator.com/item?id=31202696 (3 points, 4 days ago, 0 comments)


Sigh I was just about to ask about MS-DOS 6.22, Windows 98, that sort of thing, and I think I figured out a bit more of the answer.

1. <Company> releases source code to old useless thing that's only of historical interest

2. Ginormous community forms around it (see also: retrocomputing in general, niche hyperfocused interest groups around old software like this)

3. Community basically picks old useless thing up off the ground and rocket-boosts it into being fun and genuinely usable

4. Now-useful thing becomes a way to do cool stuff; it is now relevant!!1

This phenomenon can only work if one very specific thing is true: the functionality made available by the software in question must be closed-ended. DOS 3.30 is closed-ended; it doesn't support all the APIs of DOS 6. This software is also closed-ended, in the sense that there would be no real-world value gained from trying to make this specific codebase line up with contemporary standards (of code design, or render quality, etc.); such a project would universally be easier to start from scratch, even if it used this as a base for inspiration. Community effort to fix something like this without completely ship-of-Theseus-ing it will get a decent bit of the way there, but (being a tad objective) it'll have a stopping point. I think this scope limitation is a key part of what makes it possible to release old stuff like this.

Win98 and VB6 and things of a similar class have both huge niche focus and interest by the retrocomputing community, AND they are fundamentally open in the sense that Win98 is an entire operating system, and VB6 is an entire development environment. The purpose of both products is fundamentally to enable. To enable you to run software and enable you to build it. So, couple all that potential with the reception the release would bring, and... oops, the community forked Win98 and made it run on the latest CPUs and added VM window resize support and started backporting random NT (and brand-new) APIs to make it do new and random stuff. And then they went and fixed all the bugs in VB6's runtime and added a JIT (!) and WASM support and ThEn ThE FaTeFuL GiThUb IsSuE #42 GoT OpEnEd WhErE SoMeOnE AcTuAlLY OuTlInEd HoW To SuCcEsSfUlLy PoRt iT tO LiNuX.

The other problem is that things like old versions of Windows or Visual Basic are of a scale that's an order of magnitude larger than smaller releases like the one presented here, making community enablement that much more impactful - maybe not to the full order of magnitude, but to an extent that is nontrivial... and might hit the tipping-point of virality that would really attract attention to the project and get it front-and-center into the pop-culture limelight.

My working theory is that there isn't anything fundamentally wrong with this, to the extent that it's quite possible legal wouldn't be totally against it. Rather, you have

<Company> officially open sources thing -> community promptly hefts thing onto their collective shoulders -> thing is now relevant -> thing's open-endedness combined with its relevance turns it into a serious contender for contemporary mindshare -> everyone starts looking at <company> and making noises about how the original release was technically official -> people start asking why <company> "has abandoned *thing*"

D:

If anyone could shoot down this idea and let me know it is actually some other type of legal ramification (that maybe I could get further details about in person someday) I would be very appreciative.

EDIT: This is now at 0. Okay... interested to hear more details. (Edit2 just before it locks: it's back to 1 at least heh)


My guess is that they'll open source more stuff. Nostalgia is a powerful tool, and Microsoft's reputation is worse than that of FAANG, in many circles. They know they need to work on that, my guess is that it's primarily a priority/budget thing.

Someone needs to work on these releases and the big ones can be scary, especially from a legal point of view. So they'll just slowly release one small item at a time, and hopefully over time most of them are out in the open. Maybe we even get some big fish, like Win98.


What a bizarre product to request, though. You have to imagine that foone knew something about this being planned for open sourcing, or was somehow in on it. There's no way MS just goes "sure, why not?" when someone tweets out of the blue that they want the source code to something.


The process was surprisingly close to "hey could you do this" followed by "that would be tricky, let me see what I can do" followed by "it's up on GitHub." Many kudos to Scott Hanselman and others at Microsoft for making this happen (and happen quickly) in response to foone's request.


Yes, that's what happened. They'd been asking for a while, and I'd tried a couple of times. But their last tweet got pretty good pickup, so I made another run at it. I asked the right Vice Presidents, found the right people to agree to the new license, retrieved the source from our company archivist, found two developers who worked on the actual project, worked with the Microsoft open source office to finalize everything, and then released it. It took just about a month. I like doing things that people don't think are possible. It's literally as simple as that.


> retrieve the source from our company archivist

Is this position hiring? :D

(New career goals: become high up enough in the company that I can decree that the open-source office and company archivist positions get merged into the same team.)


It is also the person that has to hand repair corrupted Visual Source Safe databases with a hex editor.


Huh I stand corrected. Good news, either way :)


I cannot tell you how many hours I spent playing with 3DMM… once I figured out that you could make a gun by using a 3D “L” shape, my cartoons got violent and weird and I absolutely loved it, and I kept playing with it well into my teenage years.

I should see if I still have my CDs for it somewhere so I can run it in a VM.


:D


3D Movie Maker was one of my first computer experiences ever. The mascot and the intros, the movies and the scenes, some creepy-looking uncanny CG backgrounds and music. Peak 90s stuff. Amazing product!


I feel like some of my best "gaming memories" are with things that aren't even entirely games at all, but instead creation tools tailored towards kids. I spent many hours with 3D Movie Maker, but also the Spiderman Cartoon Maker, Kidpix, the PS1 version of RPG Maker, and more recently Pico-8.

I think there's a viscerally satisfying feeling you get when making something that you just don't get when you're playing someone else's game. 3DMM might have had 1% of the budget of a modern Call of Duty game, but I'll remember the time I spent playing 3DMM substantially more.


Ditto with text adventures. You can write one with Inform6 and the Inform Beginners' Guide with ease (and now with inform7 too).

You don't need to know how to draw or sketch. Just write. Inform 7 is a logical, clause-based language; you just state conditions and the compiler builds your adventure. Inform 6 is object-oriented, but it's the easiest OOP language ever: you can create a 10x10-room town in minutes, and defining the objects can take less than an hour. The logic and debugging, well, maybe some days/weeks, but far less than a graphical game.

And the craziest part: after that, your game will run under DOS, Amiga, Android, GNU/Linux, the BSDs, Windows, iOS, anywhere a Z-Machine interpreter like Frotz exists.

Opening your adventure game in a DOS/Amiga emulator and watching it run the same way as on your desktop PC is astounding.

When I opened my prototype (in Spanish, thanks to INFSP6 for Inform 6) under UAE with AmigaOS 3.1 and Frotz, and later under Frotz for DOS in DOSBox, the feeling was incredible.


I never played with Inform, though it looks pretty interesting. I tried out Twine a few years back and had a bit of fun with it.

That definitely falls within my framework though; I think people sometimes underestimate how much fun creation can be when it's straightforward. It makes me sad that Project Spark for the Xbox One never really took off, since I feel like it followed in the footsteps of RPG Maker for the PS1 or Fighter Maker for the PS2.


Inform 7 is a trip to explore. The natural language inspired programming language right now is still such a unique experience to write in. It makes writing a text adventure feel more like writing a novel to a kid that needs way too many details on how the world is structured.

I also was a bit saddened when Project Spark got shutdown. It had some great ideas, it just couldn't compete with Minecraft and Microsoft bought Minecraft. I keep wondering why more of Project Spark's good ideas don't show up in Minecraft at this point, though.


Inform is more advanced than Twine, for obvious reasons :).


If an updated version that easily runs on present day hardware gets released then the memes made with it could be glorious.


*Clicks the star and watch buttons on GitHub for you*

Seriously. It'll likely both take a little while, yet be usable much sooner than you expect.


The fact that the repo is archived indicates to me that this is just a code dump. You'll need to monitor the forks for people's attempts to resurrect it.


Follow/check @foone on Twitter; that'll be the easiest way, as they are responsible for this happening and will probably work really hard to get it running as fast as possible. They'd been tweeting about this forever, but all of a sudden last month their tweet thread picked up steam, and a month later it's open source.


I still have the original CDs for this. I booted it up the other week and much to my surprise it still ran quite well on Windows 10.


Microsoft doesn't always get it right, but they do try very hard to be backwards compatible for a very very long time.


It's true. Windows is essentially a museum of sorts


Dear God I need to find my CD of it and try. Good that I didn't skip the DVD drive in my second latest PC build ...


Not 100% sure of the legality of this now with the open-sourcing, but 3DMM is readily available on the internet archive:

https://archive.org/details/3dmoviemaker


Thanks, will try. Did not find the CD; the casing was empty ...


This is incredible! I used to spend days making movies with this when I was a kid. Short clips and random stuff just to get laughs from friends and family.

There is a community still going around this where people post their movies and stuff - https://www.3dmm.com/

Oh wow, good times!


Same! I used to run a site for highly rated 3DMM movies that was starting to become popular, until someone flagged it for copyright infringement and Xoom terminated it without warning while I was on vacation; I tried showing it to friends, but it wouldn't load :( https://en.wikipedia.org/wiki/Xoom_(web_hosting)

I then attempted to make a technically complex movie myself, and of course my HDD crashed - seems as a kid I could never catch a break when it came to technology :P


Visual C++ 2.x can be downloaded here: https://winworldpc.com/product/visual-c/2x

Hope they will someday release the good old BASIC compiler (BASCOM/BASRUN) stuff.


This is astounding. Truly above expectations. I hope it’s not the last such stunning open sourcing from Microsoft, as there is so much historical stuff I’d really like to explore, but I can’t complain.


Congrats, this type of release is really hard to do and usually gains you nothing apart from that impossible to measure reputation. Kudos to all involved.


Congrats, Microsoft! I don't know how popular, useful, or powerful this software is, but although it's not a Microsoft flagship, I love when companies open source software that no longer carries a huge profit margin. At the least, it's better than seeing it die deep in a drawer.


Absolutely. I also love it for exactly that reason. It is no longer a financial asset, no longer a competitive advantage, and not a unique jumpstart that could lead to one.


This game got me into creating stuff with computers, which lead to me becoming a developer later on. So many good memories from this.


Same here! I owe my livelihood to this game


So many childhood memories! What a shame I didn't archive my movies back then. Would be so much fun watching them 20 years later as an adult...


I'd like to take this opportunity to share that the Steamed Hams meme was recreated in 3D Movie Maker https://www.youtube.com/watch?v=YTbQHE4kVFk


Fun fact, you can blame this project for the public release of Comic Sans ;)


Wait is that true? I had always heard that was Microsoft Bob's gift to the world...though clearly I'm wrong according to Wikipedia [1].

> The infamous Comic Sans font also made its first appearance in 3D Movie Maker

[1] https://en.wikipedia.org/wiki/3D_Movie_Maker


You're both right :)

"Comic Sans was designed by Vincent Connare to fit the theme of Microsoft Bob and was inspired by comics like Watchmen and Batman. It would have fit Bob’s animated interface, but it wasn’t completed in time for the release. Given the short lifespan of Bob, we’d never learn to use Windows in Comic Sans. The font did get included in later Microsoft programs starting with Microsoft 3D Movie Maker."

From https://uxdesign.cc/the-ugly-history-of-comic-sans-bd5d07f8c...


Not to get too far off on a tangent, but I am going to take a slightly unpopular opinion: Comic Sans is amongst the most readable fonts. It sheds almost all notions of trying to be "pretty", and thus the letters are distinct and easy to read. I wouldn't submit a research paper in Comic Sans, but I do happily use it for nearly everything internal to my computer (e.g. my coding font, my Obsidian notebooks, slack, etc), and I'm not even dyslexic, which apparently it's considered really good for.

I feel like it's almost a shame that it has been deemed as "unprofessional", I think largely because of people using it in places that it doesn't really fit (e.g. the aforementioned research papers). I think there exists an alternate universe where people appreciate Comic Sans for the treasure that it is, and I suppose I can thank both MS Bob and 3DMM for it :).


I have heard a rumor that dyslexic folk have an easier time reading Comic Sans


That's what I have heard too.

I'm definitely not dyslexic, but I genuinely do just find Comic Sans to be easy to read. Serifs don't throw me off or anything and they look pretty, but they do kind of feel like noise when I'm actually just trying to read. Comic Sans just feels...legible? Like it was intended to be read-first, don't worry about impressing.


A couple of NaNoWriMos back, I saw the advice to switch to writing in Comic Sans to write faster. It seemed like silly advice, but it works surprisingly well, and I think there are a couple of keys to it. We know from the dyslexia studies that Comic Sans is often faster to read for many people (because people mostly read word shapes, not letter-at-a-time, and Comic Sans has great diversity in word shapes), and you do still need to read what you recently wrote to follow trains of thought. But there also seems to be a weird psychological effect in play when drafting in Comic Sans: with the font's overall lack of formality, you tend to give yourself more permission for mistakes and for not editing while you write.

I did find a noticeable speed improvement in NaNoWriMo attempts when writing in Comic Sans. I find that fascinating.


Makes sense. I still remember how much eye glasses "g" and fancy "a" confused me while learning to read.


Agreed. I'm a huge fan of rounded fonts for legibility, which unfortunately isn't that popular. My current favorite is San Francisco Rounded.


Someone please build this for WebAssembly!


+1 for this. I would kill to be able to play this quasi-natively on my Mac...I would be perfectly happy with a browser version.


Great! Fingers crossed Adobe gets the hint and open sources Flash MX.


Is Unreal Engine the modern-day equivalent of this? Is there something else that's more approachable?


I think this is more comparable to something like Garry's Mod than a bare game engine.


Source Filmmaker and Unity are pretty approachable too


SFM is definitely the closest modern equivalent that comes to mind.


SFM is a real tool. People have made production quality stuff on it:

https://youtu.be/PU1fu6ErBoA


Listen, I have a real soft spot for SFM and the people who use it. Valve made some really quality stuff with it in their heyday, and every once in a while I'll see a Saxxy winner that really knocks my socks off. Splicing stock footage with SFM-rendered content is not "production quality" though, and generally speaking, their render pipeline has aged the worst of all. TF2 and SFM looked great in 2008; the highly stylized graphics did a good job of separating them from the pack. Nowadays it just looks corny, though. Maybe it's because I've just seen so much SFM by now, but I think the bigger issue is that the textures and texture filtering just look bad these days. If SFM wants to remain relevant, it's going to need a lot of work put into SFM2 to get it ready for a new era of animation.


I mean S2FM is already usable, just look at TheParryGod's stuff on YT.


It looks good, but as you can probably tell, you're only able to import Half-Life: Alyx assets as of right now. Without Workshop assets like SFM1 had, it's going to be really hard for it to reach the critical mass it once enjoyed.


Fair enough. I'm hoping one day they just make some simple mo-cap devices that let you control an in-game model, while one person uses an in-game camera like in an FPS. Many of these game engines look good enough if you can just get a few people together to act out scenes and do camera work.


I'm working on something that intersects and have a private beta. If anyone is interested in this space, feel free to reach out. Right now I need another developer in addition to myself but will need some growth/product/ops help in the near future. Raise forthcoming.


Godot is pretty much drag and drop AND open source.


Godot is cool, but it's still an order of magnitude more complicated to make something than 3DMM, at least in the little bit of Godot that I've played with.

You can be "productive" with 3DMM in a matter of minutes, vs the few hours it takes for Godot. Of course, Godot is infinitely more capable than 3DMM, so it's sort of apples and oranges.


"I'll trade a magic trick for a vase."


Hey hey! That was excellent! What would you like to do now?


hmmm i'll check to see if there's time before dinner


who are you and what have you done with Microsoft


They're still there.

This org chart is as accurate as it was in 2011:

https://www.cultofmac.com/102917/apple-ms-google-etc-imagine...

One group is moving forward and another one is putting ads in the OS...


I got it to build on my PCem Windows 98 machine with Visual C++ 2.0 installed (among other unrelated things), up until the utilglob.obj file. To make things easier, I added the bin folder of the MSVC20 folder to my autoexec's PATH. I also used Visual Studio Code on my host computer to replace every instance of rm -f with del, and removed every instance of cmd /c and 2<nul, because none of that is needed with the old DOS prompt. I also made the proper modifications to the setvars.bat file, which you have to run in the command prompt you're going to use for this entire process.
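For anyone repeating this, the find-and-replace pass over the build files could also be scripted instead of done by hand in an editor. A rough sketch in Python follows; the src/ directory and the *.mak glob are my assumptions (not from the repo), and I'm reading the "2<nul" above as the stderr redirect "2>nul".

```python
# Sketch of the build-file edits described above: swap Unix-style
# commands for ones the old DOS prompt understands. The src/ path and
# *.mak pattern are assumptions; adjust them for the actual tree.
from pathlib import Path

REPLACEMENTS = [
    ("rm -f", "del"),   # DOS uses del, not rm
    ("cmd /c ", ""),    # no cmd wrapper needed under plain DOS
    (" 2>nul", ""),     # drop the stderr redirect entirely
]

def patch_makefile(text: str) -> str:
    """Apply the substitutions to one makefile's contents."""
    for old, new in REPLACEMENTS:
        text = text.replace(old, new)
    return text

if __name__ == "__main__":
    for mk in Path("src").rglob("*.mak"):
        mk.write_text(patch_makefile(mk.read_text()))
```

Run it from the repo root before kicking off the build in the guest; it leaves any line without those patterns untouched.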


Anyone have any favourite videos made with this?


Lots of ironic meme-style videos from 10+ years ago. (NSFW language)

- https://youtu.be/PGJKeESLBpQ

- https://youtu.be/X0pEBE_eYdU

- https://youtu.be/H8Nbd4yO1qw (this is a reup from 2009)

- https://youtu.be/VqqRRWIYiy8?t=16

Actually there is an entire playlist you can enjoy:

https://www.youtube.com/playlist?list=PL_7_xaiHt-pegi1SwC1tV...


Wow, that first one is really surprising for its proficient cinematic vocabulary. The shots are great, juxtaposed against this very rudimentary 3D tool. Neat!


Yes that one impressed me the most.

A random aside: note that on the videos from 2006/7, you've got comments from the last year joking that the authors are time-travelers spreading Gen-Z humor to the past.



One example, taken from the forum posted in another comment:

https://youtu.be/JYEYP-iTNHc



Came here to post The Rat Movie and the Blue Shell Incident. This community hasn't disappointed me.


I still have the CD of the Nickelodeon 3D Movie Maker release, which was basically this software but with all the characters, sets, and sound effects taken from cartoons like Rocko's Modern Life, Aaahh!!! Real Monsters, Ren & Stimpy, etc. Wish I still had my save files.

https://archive.org/details/nick3dmoviemaker

https://nickelodeon.fandom.com/wiki/Nickelodeon_3D_Movie_Mak...


Oh my FSM! This brings back some very, very early and almost forgotten memories! Of fiddling with a limited demo version of this, and it seeming so magical to my kid eyes way back then... :)


This is a pleasant surprise. I absolutely lived on 3DMM and 3DMM.com (which is still going)

They already did a great job expanding the tools via RE, but I wonder if anyone will start adding new features.


Now to set up GitHub Actions to build it... ;)


I'm waiting for the ports to Mac and Linux. I doubt it will happen but a guy can dream...


I wonder which OS this was originally developed on if it's 32-bit but pre-Win95. Maybe NT3 or Chicago betas?


Probably NT3.


If anyone is interested in working on a similar product, I've developed a product that is in private beta with users. Raise forthcoming. Developers are always needed but don't hesitate to reach out if you can help with growth or product.


I hope someone creates a modern build which can run on current Windows versions.


That's literally one of the goals of foone (https://twitter.com/foone), who has been asking Microsoft to release the source code for years.

Other goals, as far as I remember, are removing limitations on number of objects etc., the ability to export in a modern video format, and different resolutions.


It looks like if you have the CDs, it still runs on Windows 10.


Does anyone remember Pandora Box

https://www.youtube.com/watch?v=UMeidL4xZ2A


Seems like each source file has a comment saying:

Primary Author : *** Review Status: Reviewed

I wonder what was their process for version control and code review back in the day?


With the exception of specific engineers whom we checked with, our process includes masking developer aliases, names, etc. Version control does seem to have been a lot more basic back then. We're seeing comments on Twitter suggesting this was likely under the SLM system, which involved holding an exclusive lock on specific files. (Some history here: https://devblogs.microsoft.com/oldnewthing/20180122-00/?p=97...)


Ahh, the big old 9,000-line-file way of developing code.

There are surprisingly few files, but those that are here are massive.


I recall thinking this was absolutely amazing on my uncle’s ¿486? Now I’m not so sure it holds up :)


Someone got it built and running:

https://www.youtube.com/watch?v=gqXTzlDZmhU

See the comments on YouTube.


Are there any files in the source release that can also be found in the CD release? Are they 100% identical, or did we maybe get the sources of a pre-1.0 or post-1.0 version?


Can someone explain to me why Microsoft file and directory names are always so hideous?

Like, what in the world is cd12, cd2, cd3, cd9? Somehow it gets worse when you click into them.


That’s not related to Microsoft the company. That’s a result of the archivist systems that I had to mess with to get this stuff restored from tapes and CDs. I didn’t feel like changing it too much. So blame me.


My comment definitely came across as rude and I apologize. Thanks for your work and for taking the time to provide a thoughtful answer :) I'd probably also have left it just as you did.


:)


I prefer it this way! It's better to share the "rawest" backups and fix them up after they've been dumped into version control so that a history of changes is available.


Thanks for your effort. Having a piece of my childhood made open source makes me so happy.


Consider yourself blamed Scott! jk


Being nearly 30 years old, a lot of the naming looks hideous because it was developed with 8.3 filenames. https://docs.microsoft.com/en-us/openspecs/windows_protocols...


The migration away from having numbers in identifiers is a subtle but extremely important aspect of modern code quality, I feel.

Having worked on code that's at least 20 years older than me, by far the hardest parts to work out are the func1, func2, func3.


Starglider was brilliant =)


Old-school MS code! :)


memories


> The code does not build with today's engineering tools, and is released as-is.

Unfortunate given Microsoft's track record of maintaining backwards compatibility


Microsoft has an amazing track record of binary compatibility, not source. Although that does imply that you should be able to run an old compiler and make it work.


I feel like it's close... but it's pretty hard to take engineers off of modern-day projects to go figure out what's needed to make an almost 30-year-old codebase build. I imagine some folks smarter than we are will get this building soon!


As nice as it is to see some of the old source code released for nostalgia reasons (the old winfile also comes to mind), it's been even nicer to see some of the modern Microsoft apps make their way out as well. I've been able to directly interact with e.g. Windows Terminal development in ways where traditional closed development would have just left me frustrated and disappointed.


Happy to learn more, ping me in some form. The Windows Terminal is building a community on GitHub and it's pretty active.


It's not surprising considering how many major versions of Windows have since passed. XP and 7 each broke large swaths of software, for mostly good reasons, despite all sorts of work to maintain compatibility.

The important thing is it's a starting point. foone specifically indicated a desire to update and extend the software, which means at minimum, a fork that runs on modern platforms.


Isn't this a really good reason for making it public? Microsoft obviously has no reason (i.e. ROI) to spend days trying to make this build.


bruh you can't build Apple software that was written like 2 years ago what are you talking about



