The original debate
Linus vs. Tanenbaum, a translation (repost)
A few days ago I saw a mention on Xu You's blog that "Linus and Tanenbaum once had a famous argument." Curious, I found the original article through Google and decided to translate it. It is really long, though, so this will be a long-term effort: I will translate a little more each week. (The translation follows.)
This is an extract of the discussion between Andy Tanenbaum and Linus Benedict Torvalds about kernel design, free software, and more. Only the contributions from the main participants are included.
Linux is obsolete
Andy Tanenbaum
From: ast@cs.vu.nl (Andy Tanenbaum)
Newsgroups: comp.os.minix
Subject: LINUX is obsolete
Date: 29 Jan 92 12:12:50 GMT
Organization: Fac. Wiskunde & Informatica, Vrije Universiteit, Amsterdam
I was in the U.S. for a couple of weeks, so I haven't commented much on LINUX (not that I would have said much had I been around), but for what it is worth, I have a couple of comments now.
As most of you know, for me MINIX is a hobby, something that I do in the evening when I get bored writing books and there are no major wars, revolutions, or senate hearings being televised live on CNN. My real job is a professor and researcher in the area of operating systems.
As a result of my occupation, I think I know a bit about where operating systems are going in the next decade or so. Two aspects stand out:
Microkernel vs Monolithic System
Most older operating systems are monolithic, that is, the whole operating system is a single a.out file [i.e. a single binary - note by lijunsong] that runs in "kernel mode." This binary contains the process management, memory management, file system and the rest. Examples of such systems are UNIX, MS-DOS, VMS, MVS, OS/360, MULTICS, and many more.
The alternative is a microkernel-based system, in which most of the OS runs as separate processes, mostly outside the kernel. They communicate by message passing. The kernel's job is to handle the message passing, interrupt handling, low-level process management, and possibly the I/O. Examples of this design are the RC4000, Amoeba, Chorus, Mach, and the not-yet-released Windows/NT.
While I could go into a long story here about the relative merits of the two designs, suffice it to say that among the people who actually design operating systems, the debate is essentially over. Microkernels have won. The only real argument for monolithic systems was performance, and there is now enough evidence showing that microkernel systems can be just as fast as monolithic systems (e.g., Rick Rashid has published papers comparing Mach 3.0 to monolithic systems) that it is now all over but the shouting.
MINIX is a microkernel-based system. The file system and memory management are separate processes, running outside the kernel. The I/O drivers are also separate processes (in the kernel, but only because the brain-dead nature of the Intel CPUs makes that difficult to do otherwise). LINUX is a monolithic style system. This is a giant step back into the 1970s. That is like taking an existing, working C program and rewriting it in BASIC. To me, writing a monolithic system in 1991 is a truly poor idea.
Portability
Once upon a time there was the 4004 CPU. When it grew up it became an 8008. Then it underwent plastic surgery and became the 8080. It begat the 8086, which begat the 8088, which begat the 80286, which begat the 80386, which begat the 80486, and so on unto the N-th generation. In the meantime, RISC chips happened, and some of them are running at over 100 MIPS. Speeds of 200 MIPS and more are likely in the coming years. These things are not going to suddenly vanish. What is going to happen is that they will gradually take over from the 80x86 line. They will run old MS-DOS programs by interpreting the 80386 in software. (I even wrote my own IBM PC simulator in C, which you can get by FTP from ftp.cs.vu.nl = 192.31.231.42 in dir minix/simulator.) I think it is a gross error to design an OS for any specific architecture, since that is not going to be around all that long.
MINIX was designed to be reasonably portable, and has been ported from the Intel line to the 680x0 (Atari, Amiga, Macintosh), SPARC, and NS32016. LINUX is tied fairly closely to the 80x86. Not the way to go.
Don't get me wrong, I am not unhappy with LINUX. It will get all the people who want to turn MINIX into BSD UNIX off my back. But in all honesty, I would suggest that people who want a MODERN "free" OS look around for a microkernel-based, portable OS, like maybe GNU or something like that.
Andy Tanenbaum (ast@cs.vu.nl)
P.S. Just as an aside, Amoeba has a UNIX emulator (running in user space), but it is far from complete. If there are any people who would like to work on that, please let me know. To run Amoeba you need a few 386s, one of which needs 16M, and all of which need the WD Ethernet card.
----
Linus Benedict Torvalds
From: torvalds@klaava.Helsinki.FI (Linus Benedict Torvalds)
Newsgroups: comp.os.minix
Subject: Re: LINUX is obsolete
Date: 29 Jan 92 23:14:26 GMT
Organization: University of Helsinki
Well, with a subject like this, I'm afraid I'll have to reply. Apologies to minix users who have heard enough about linux anyway. I'd like to be able to just "ignore the bait", but... time for some serious flamefesting!
In article <12595@star.cs.vu.nl> ast@cs.vu.nl (Andy Tanenbaum) writes:
I was in the U.S. for a couple of weeks, so I haven't commented much on LINUX (not that I would have said much had I been around), but for what it is worth, I have a couple of comments now. As most of you know, for me MINIX is a hobby, something that I do in the evening when I get bored writing books and there are no major wars, revolutions, or senate hearings being televised live on CNN. My real job is a professor and researcher in the area of operating systems.
You use this as an excuse for the limitations of minix? Sorry, but you lose: I've got more excuses than you have, and linux still beats the pants off minix in almost all areas. Not to mention the fact that most of the good code for PC minix seems to have been written by Bruce Evans. [Bruce Evans was the main author of the 32-bit version of MINIX and a good friend of Linus Torvalds - note by lijunsong]
Re 1: you doing minix as a hobby - look at who makes money off minix, and who gives linux out for free. Then talk about hobbies. Make minix freely available, and one of my biggest gripes with it will disappear. Linux has very much been a hobby (but a serious one: the best type) for me: I get no money for it, and it's not even part of any of my studies at the university. I've done it all on my own time, and on my own machine.
Re 2: your job is being a professor and researcher: that's one hell of a good excuse for some of the brain-damage of minix. I can only hope (and assume) that Amoeba doesn't suck like minix does.
1. Microkernel vs monolithic system
True, linux is monolithic, and I agree that microkernels are nicer. With a less argumentative subject, I'd probably have agreed with most of what you said. From a theoretical (and aesthetical) standpoint linux loses. If the GNU kernel had been ready last spring, I'd not have bothered to even start my project: the fact is that it wasn't and still isn't. Linux wins heavily on the point of being available now.
MINIX is a microkernel-based system. LINUX is a monolithic style system.
If this was the only criterion for the "goodness" of a kernel, you'd be right. What you don't mention is that minix doesn't do the microkernel thing very well, and has problems with real multitasking (in the kernel). If I had made an OS that had problems with a multithreading filesystem, I wouldn't be so fast to condemn others: in fact, I'd do my damnedest to make others forget about the fiasco. [Yes, I know there are multithreading hacks for minix, but they are hacks, and Bruce Evans tells me there are lots of race conditions.]
2. Portability
"Portability is for people who cannot write new programs" - me, right now (with tongue in cheek)
The fact is that linux is more portable than minix. "What?" I hear you say. It's true - but not in the sense that ast [i.e. Andy Tanenbaum; Linus is calling him by his login name here - note by lijunsong] means: I made linux as conformant to standards as I knew how (without having any POSIX standard in front of me). Porting things to linux is generally much easier than porting them to minix.
I agree that portability is a good thing: but only where it actually has some meaning. There is no point in trying to make an operating system overly portable: adhering to a portable API is good enough. The very idea of an operating system is to use the hardware features and hide them behind a layer of high-level calls. That is exactly what linux does: it just uses a bigger subset of the 386 features than other kernels seem to do. Of course this makes the kernel proper unportable, but it also makes for a much simpler design. An acceptable trade-off, and one that made linux possible in the first place.
I also agree that linux takes the non-portability to an extreme: I got my 386 last January, and linux was partly a project to teach me about it. Many things should have been done more portably if it had been a real project. I'm not making too many excuses about it though: it was a design decision, and last April when I started the thing, I didn't think anybody would actually want to use it. I'm happy to report I was wrong, and as my source is freely available, anybody is free to try to port it, even though it won't be easy.
Linus
PS. I apologise for sometimes sounding too harsh: minix is nice enough if you have nothing else. Amoeba might be nice if you have 5-10 spare 386s lying around, but I certainly don't. I don't usually get into flames, but I'm touchy when it comes to linux :)
Andy Tanenbaum
From: ast@cs.vu.nl (Andy Tanenbaum)
Newsgroups: comp.os.minix
Subject: Re: LINUX is obsolete
Date: 30 Jan 92 13:44:34 GMT
Organization: Fac. Wiskunde & Informatica, Vrije Universiteit, Amsterdam
Linus Benedict Torvalds writes:
You use this [being a professor] as an excuse for the limitations of minix?
The limitations of MINIX relate at least partly to my being a professor: an explicit design goal was to make it run on cheap hardware so students could afford it. In particular, for years it ran on a regular 4.77 MHz PC with no hard disk. You could do everything here, including modify and recompile the system. Just for the record, as of about one year ago there were two versions, one for the PC (360K diskettes) and one for the 286/386 (1.2M). The PC version was outselling the 286/386 version by 2 to 1. I don't have figures, but my guess is that the fraction of the 60 million existing PCs that are 386/486 machines, as opposed to 8088/286/680x0 etc., is small. Among students it is even smaller. Making software free, but only for folks with enough money to buy first-class hardware, is an interesting concept. Of course 5 years from now that will be different, but 5 years from now everyone will be running free GNU on their 200 MIPS, 64M SPARCstation-5.
Re 2: your job is being a professor and researcher: that's one hell of a good excuse for some of the brain-damage of minix. I can only hope (and assume) that Amoeba doesn't suck like minix does.
Amoeba was not designed to run on an 8088 with no hard disk.
If this was the only criterion for the "goodness" of a kernel, you'd be right. What you don't mention is that minix doesn't do the microkernel thing very well, and has problems with real multitasking (in the kernel). If I had made an OS that had problems with a multithreading filesystem, I wouldn't be so fast to condemn others: in fact, I'd do my damnedest to make others forget about the fiasco.
A multithreaded file system is only a performance hack. When there is only one job active, the normal case on a small PC, it buys you nothing and adds complexity to the code. On machines fast enough to support multiple users, you probably have enough buffer cache to ensure a high cache hit rate, in which case multithreading also buys you nothing. It is only a win when there are multiple processes actually doing real disk I/O. Whether it is worth making the system more complicated for this case is at least debatable.
I still maintain the point that designing a monolithic kernel in 1991 is a fundamental error. Be thankful you are not my student. You would not get a high grade for such a design :-)
The fact is that linux is more portable than minix. "What?" I hear you say. It's true - but not in the sense that ast means: I made linux as conformant to standards as I knew how (without having any POSIX standard in front of me). Porting things to linux is generally much easier than porting them to minix.
MINIX was designed before POSIX, and is now being (slowly) POSIXized, as everyone who follows this newsgroup knows. Everyone agrees that user-level standards are a good idea. As an aside, I congratulate you for being able to write a POSIX-conformant system without having the POSIX standard in front of you. I find it difficult enough after studying the standard at great length.
My point is that writing a new operating system that is closely tied to any particular piece of hardware, especially a weird one like the Intel line, is basically wrong. An OS itself should be easily portable to new hardware platforms. When OS/360 was written in assembler for the IBM 360 25 years ago, they probably could be excused. When MS-DOS was written specifically for the 8088 ten years ago, this was less than brilliant, as IBM and Microsoft now only too painfully realize. Writing a new OS only for the 386 in 1991 gets you your second 'F' for this term. But if you do really well on the final exam, you can still pass the course.
Prof. Andrew S. Tanenbaum (ast@cs.vu.nl)
----
Linus Benedict Torvalds
From: torvalds@klaava.Helsinki.FI (Linus Benedict Torvalds)
Newsgroups: comp.os.minix
Subject: Re: LINUX is obsolete
Date: 31 Jan 92 10:33:23 GMT
Organization: University of Helsinki
Andy Tanenbaum writes:
The limitations of MINIX relate at least partly to my being a professor: an explicit design goal was to make it run on cheap hardware so students could afford it.
All right: a real technical point, and one that makes some of my comments inexcusable. But at the same time you shoot yourself in the foot a bit: now you admit that some of the errors of minix came from it being too portable, including to machines that weren't really designed to run unix. That assumption led to the fact that minix now cannot easily be extended to have things like paging, even on machines that would support it. Yes, minix is portable, but you can rewrite that as "doesn't use any features", and still be right.
A multithreaded file system is only a performance hack.
Not true. It's a performance hack on a microkernel, but it's an automatic feature when you write a monolithic kernel - one area where microkernels don't work too well (as I pointed out in my personal mail to ast). When writing unix the "obsolete" way, you automatically get a multithreaded kernel: every process does its own job, and you don't have to make ugly things like message queues to make it work efficiently.
Besides, there are people who would consider "only a performance hack" vital: unless you have a cray-3, I'd guess everybody gets tired of waiting on the computer all the time. I know I did with minix (and yes, I do with linux too, but it's much better).
I still maintain the point that designing a monolithic kernel in 1991 is a fundamental error. Be thankful you are not my student. You would not get a high grade for such a design :-)
Well, I probably won't get too good grades even without you: I had an argument (completely unrelated - not even pertaining to OS's) with the person here at the university who teaches OS design. I wonder when I'll learn :)
My point is that writing a new operating system that is closely tied to any particular piece of hardware, especially a weird one like the Intel line, is basically wrong.
But my point is that the operating system isn't tied to any processor line: UNIX runs on most real processors in existence. Yes, the implementation is hardware-specific, but there's a HUGE difference. You mention OS/360 and MS-DOS as examples of bad designs because they were hardware-dependent, and I agree. But there's a big difference between those and linux: the linux API is portable (not due to my clever design, but due to the fact that I decided to go for a fairly well-thought-out and tested OS: unix).
If you write programs for linux today, you shouldn't have too many surprises when you just recompile them for Hurd in the 21st century. [Hurd is GNU's own kernel; it was precisely because Hurd was not ready that the Linux kernel grew and prospered - note by lijunsong] As has been noted (not only by me), the linux kernel is a minuscule part of a complete system: full sources for linux currently run to about 200kB compressed, while full sources for a somewhat complete development system are at least 10MB compressed (and easily much, much more). And all of that source is portable, except for this tiny kernel that you can (provably: I did it) rewrite totally from scratch in less than a year without having any prior knowledge.
In fact the whole linux kernel is much smaller than the 386-dependent things in mach: i386.tar.Z for the current version of mach is well over 800kB compressed (823391 bytes according to nic.funet.fi). Admittedly, mach is "somewhat" bigger and has more features, but that should still tell you something.
Linus
Linus Benedict Torvalds
From: torvalds@klaava.Helsinki.FI (Linus Benedict Torvalds)
Newsgroups: comp.os.minix
Subject: Apologies (was Re: LINUX is obsolete)
Date: 30 Jan 92 15:38:16 GMT
Organization: University of Helsinki
I wrote:
Well, with a subject like this, I'm afraid I'll have to reply.
And reply I did, with complete abandon, and no thought for good taste and netiquette. Apologies to ast, and thanks to John Nall for a friendly "that's not how it's done" letter. [Since only the main contributions are included here (see the note at the beginning), the bystanders' messages are not translated - note by lijunsong] I over-reacted, and am now composing a (much less acerbic) personal letter to ast. I hope nobody was turned away from linux because of it being (a) possibly obsolete (I still think that's not the case, although some of the criticisms are valid) and (b) written by a hothead :-)
Linus "my first, and hopefully last, flamefest" Torvalds
----
The part that follows seems to be where the debate got really heated; the compiler of this web page pulled it out on its own.
Watching this group of hackers argue so fiercely, I find myself getting drawn in without noticing. The feeling is a bit like watching The Big Bang Theory and seeing those physics PhDs bring up Schrödinger's cat in everyday life... :-) Only at moments like this do I feel how closely scientific knowledge is tied to our lives.
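To get a more concrete feel for what the two camps are actually arguing about, here is a toy sketch of my own in C (it is not from the thread and is not modeled on any real kernel's code): in the monolithic style, a request such as a file read is just a function call inside the one big kernel image, while in the microkernel style the same request is packed into a message and handed to a separate file-server process, with the kernel doing little more than delivering it.

    /* Toy illustration (not from the thread, not a real kernel API):
     * the same "read" request served by a direct in-kernel call
     * versus by a message passed to a separate file-server task. */
    #include <stdio.h>

    /* monolithic style: one address space, a plain function call */
    static int fs_read_monolithic(int fd, char *buf, int len) {
        (void)fd;
        return snprintf(buf, len, "data from in-kernel file system");
    }

    /* microkernel style: the request travels as a message */
    struct message {
        int type;            /* e.g. FS_READ */
        int fd;
        char payload[64];
    };

    enum { FS_READ = 1 };

    /* stands in for a separate file-server process receiving messages */
    static void file_server(struct message *m) {
        if (m->type == FS_READ)
            snprintf(m->payload, sizeof m->payload,
                     "data from file-server process (fd=%d)", m->fd);
    }

    /* stands in for the kernel's only job here: delivering the message */
    static void send_and_receive(struct message *m) {
        file_server(m);  /* in a real system this crosses a process boundary */
    }

    int main(void) {
        char buf[64];
        fs_read_monolithic(3, buf, sizeof buf);
        printf("monolithic : %s\n", buf);

        struct message m = { .type = FS_READ, .fd = 3 };
        send_and_receive(&m);
        printf("microkernel: %s\n", m.payload);
        return 0;
    }

Even in this toy, the trade-off they keep circling is visible: the message version keeps the file server isolated in its own process, while the direct-call version is simpler and has no message traffic at all.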
Unhappy campers
Andy Tanenbaum
From: ast@cs.vu.nl (Andy Tanenbaum)
Newsgroups: comp.os.minix
Subject: Unhappy campers
Date: 3 Feb 92 22:46:40 GMT
Organization: Fac. Wiskunde & Informatica, Vrije Universiteit, Amsterdam
I've been getting a bit of mail lately from unhappy campers. (Actually, 10 messages out of 43,000 readers may seem like a lot, but it really isn't.) There seem to be three sticking points:
- Monolithic kernels are just as good as microkernels
- Portability isn't so important
- Software ought to be free
If people want to have a serious discussion of microkernels vs. monolithic kernels, fine. We can do that in comp.os.research. But please don't sound off if you have no idea what you are talking about. I have helped design and implement 3 operating systems, one monolithic and two micro, and have studied many others in detail. Many of the arguments offered here are nonstarters (e.g., microkernels are no good because you can't do paging in user space - except that Mach DOES do paging in user space).
If you don't know much about microkernels vs. monolithic kernels, there is some useful information in a paper I coauthored with Fred Douglis, Frans Kaashoek and John Ousterhout in the Dec. 1991 issue of COMPUTING SYSTEMS, the USENIX journal. If you don't have that journal, you can FTP the paper from ftp.cs.vu.nl (192.31.231.42) in directory amoeba/papers as comp_sys.tex.Z (compressed TeX source) or comp_sys.ps.Z (compressed PostScript). The paper gives actual performance measurements and supports Rick Rashid's conclusion that microkernel-based systems are just as efficient as monolithic kernels.
As to portability, there is hardly any serious discussion possible any more. UNIX has been ported to everything from PCs to Crays. Writing a portable OS is not much harder than writing a nonportable one, and all systems should be written with portability in mind these days. Surely Linus' OS professor pointed this out. Making OS code portable is not something I invented in 1987.
While most people can talk rationally about kernel design and portability...
----
(to be continued)
Date: 2010-09-19
Linus vs. Tanenbaum
This is an extract of the discussion between Andy Tanenbaum and Linus Benedict Torvalds about kernel design, free software, and more. Only contributions from the main actors are included. The complete archive is also available, but only in BABYL format. You can use Emacs RMAIL to read it conveniently.
Linux is obsolete
Andy Tanenbaum
From: ast@cs.vu.nl (Andy Tanenbaum)
Newsgroups: comp.os.minix
Subject: LINUX is obsolete
Date: 29 Jan 92 12:12:50 GMT
Organization: Fac. Wiskunde & Informatica, Vrije Universiteit, Amsterdam
I was in the U.S. for a couple of weeks, so I haven't commented much on LINUX (not that I would have said much had I been around), but for what it is worth, I have a couple of comments now.
As most of you know, for me MINIX is a hobby, something that I do in the evening when I get bored writing books and there are no major wars, revolutions, or senate hearings being televised live on CNN. My real job is a professor and researcher in the area of operating systems.
As a result of my occupation, I think I know a bit about where operating systems are going in the next decade or so. Two aspects stand out:
Microkernel vs Monolithic System
Most older operating systems are monolithic, that is, the whole operating system is a single a.out file that runs in 'kernel mode.' This binary contains the process management, memory management, file system and the rest. Examples of such systems are UNIX, MS-DOS, VMS, MVS, OS/360, MULTICS, and many more.
The alternative is a microkernel-based system, in which most of the OS runs as separate processes, mostly outside the kernel. They communicate by message passing. The kernel's job is to handle the message passing, interrupt handling, low-level process management, and possibly the I/O. Examples of this design are the RC4000, Amoeba, Chorus, Mach, and the not-yet-released Windows/NT.
While I could go into a long story here about the relative merits of the two designs, suffice it to say that among the people who actually design operating systems, the debate is essentially over. Microkernels have won. The only real argument for monolithic systems was performance, and there is now enough evidence showing that microkernel systems can be just as fast as monolithic systems (e.g., Rick Rashid has published papers comparing Mach 3.0 to monolithic systems) that it is now all over but the shoutin'.
MINIX is a microkernel-based system. The file system and memory management are separate processes, running outside the kernel. The I/O drivers are also separate processes (in the kernel, but only because the brain-dead nature of the Intel CPUs makes that difficult to do otherwise). LINUX is a monolithic style system. This is a giant step back into the 1970s. That is like taking an existing, working C program and rewriting it in BASIC. To me, writing a monolithic system in 1991 is a truly poor idea.
Portability
Once upon a time there was the 4004 CPU. When it grew up it became an 8008. Then it underwent plastic surgery and became the 8080. It begat the 8086, which begat the 8088, which begat the 80286, which begat the 80386, which begat the 80486, and so on unto the N-th generation. In the meantime, RISC chips happened, and some of them are running at over 100 MIPS. Speeds of 200 MIPS and more are likely in the coming years. These things are not going to suddenly vanish. What is going to happen is that they will gradually take over from the 80x86 line. They will run old MS-DOS programs by interpreting the 80386 in software. (I even wrote my own IBM PC simulator in C, which you can get by FTP from ftp.cs.vu.nl = 192.31.231.42 in dir minix/simulator.) I think it is a gross error to design an OS for any specific architecture, since that is not going to be around all that long.
MINIX was designed to be reasonably portable, and has been ported from the Intel line to the 680x0 (Atari, Amiga, Macintosh), SPARC, and NS32016. LINUX is tied fairly closely to the 80x86. Not the way to go.
Don't get me wrong, I am not unhappy with LINUX. It will get all the people who want to turn MINIX in BSD UNIX off my back. But in all honesty, I would suggest that people who want a **MODERN** "free" OS look around for a microkernel-based, portable OS, like maybe GNU or something like that.
Andy Tanenbaum (ast@cs.vu.nl)
P.S. Just as a random aside, Amoeba has a UNIX emulator (running in user space), but it is far from complete. If there are any people who would like to work on that, please let me know. To run Amoeba you need a few 386s, one of which needs 16M, and all of which need the WD Ethernet card.
Linus Benedict Torvalds
From: torvalds@klaava.Helsinki.FI (Linus Benedict Torvalds)
Newsgroups: comp.os.minix
Subject: Re: LINUX is obsolete
Date: 29 Jan 92 23:14:26 GMT
Organization: University of Helsinki
Well, with a subject like this, I'm afraid I'll have to reply. Apologies to minix-users who have heard enough about linux anyway. I'd like to be able to just "ignore the bait", but ... Time for some serious flamefesting!
In article <12595@star.cs.vu.nl> ast@cs.vu.nl (Andy Tanenbaum) writes:
I was in the U.S. for a couple of weeks, so I haven't commented much on LINUX (not that I would have said much had I been around), but for what it is worth, I have a couple of comments now.As most of you know, for me MINIX is a hobby, something that I do in the evening when I get bored writing books and there are no major wars, revolutions, or senate hearings being televised live on CNN. My real job is a professor and researcher in the area of operating systems.
You use this as an excuse for the limitations of minix? Sorry, but you loose: I've got more excuses than you have, and linux still beats the pants of minix in almost all areas. Not to mention the fact that most of the good code for PC minix seems to have been written by Bruce Evans.
Re 1: you doing minix as a hobby - look at who makes money off minix, and who gives linux out for free. Then talk about hobbies. Make minix freely available, and one of my biggest gripes with it will disappear. Linux has very much been a hobby (but a serious one: the best type) for me: I get no money for it, and it's not even part of any of my studies in the university. I've done it all on my own time, and on my own machine.
Re 2: your job is being a professor and researcher: That's one hell of a good excuse for some of the brain-damages of minix. I can only hope (and assume) that Amoeba doesn't suck like minix does.
1. MICROKERNEL VS MONOLITHIC SYSTEM
True, linux is monolithic, and I agree that microkernels are nicer. With a less argumentative subject, I'd probably have agreed with most of what you said. From a theoretical (and aesthetical) standpoint linux looses. If the GNU kernel had been ready last spring, I'd not have bothered to even start my project: the fact is that it wasn't and still isn't. Linux wins heavily on points of being available now.
MINIX is a microkernel-based system. [deleted, but not so that you miss the point ] LINUX is a monolithic style system.
If this was the only criterion for the "goodness" of a kernel, you'd be right. What you don't mention is that minix doesn't do the micro-kernel thing very well, and has problems with real multitasking (in the kernel). If I had made an OS that had problems with a multithreading filesystem, I wouldn't be so fast to condemn others: in fact, I'd do my damndest to make others forget about the fiasco.
[ yes, I know there are multithreading hacks for minix, but they are hacks, and bruce evans tells me there are lots of race conditions ]
2. PORTABILITY
"Portability is for people who cannot write new programs" -me, right now (with tongue in cheek)
The fact is that linux is more portable than minix. What? I hear you say. It's true - but not in the sense that ast means: I made linux as conformant to standards as I knew how (without having any POSIX standard in front of me). Porting things to linux is generally /much/ easier than porting them to minix.
I agree that portability is a good thing: but only where it actually has some meaning. There is no idea in trying to make an operating system overly portable: adhering to a portable API is good enough. The very /idea/ of an operating system is to use the hardware features, and hide them behind a layer of high-level calls. That is exactly what linux does: it just uses a bigger subset of the 386 features than other kernels seem to do. Of course this makes the kernel proper unportable, but it also makes for a /much/ simpler design. An acceptable trade-off, and one that made linux possible in the first place.
I also agree that linux takes the non-portability to an extreme: I got my 386 last January, and linux was partly a project to teach me about it. Many things should have been done more portably if it would have been a real project. I'm not making overly many excuses about it though: it was a design decision, and last april when I started the thing, I didn't think anybody would actually want to use it. I'm happy to report I was wrong, and as my source is freely available, anybody is free to try to port it, even though it won't be easy.
Linus
PS. I apologise for sometimes sounding too harsh: minix is nice enough if you have nothing else. Amoeba might be nice if you have 5-10 spare 386's lying around, but I certainly don't. I don't usually get into flames, but I'm touchy when it comes to linux :)
Andy Tanenbaum
From: ast@cs.vu.nl (Andy Tanenbaum)
Newsgroups: comp.os.minix
Subject: Re: LINUX is obsolete
Date: 30 Jan 92 13:44:34 GMT
Organization: Fac. Wiskunde & Informatica, Vrije Universiteit, Amsterdam
In article <1992Jan29.231426.20469@klaava.Helsinki.FI> torvalds@klaava.Helsinki.FI (Linus Benedict Torvalds) writes:
You use this [being a professor] as an excuse for the limitations of minix?
The limitations of MINIX relate at least partly to my being a professor: An explicit design goal was to make it run on cheap hardware so students could afford it. In particular, for years it ran on a regular 4.77 MHZ PC with no hard disk. You could do everything here including modify and recompile the system. Just for the record, as of about 1 year ago, there were two versions, one for the PC (360K diskettes) and one for the 286/386 (1.2M). The PC version was outselling the 286/386 version by 2 to 1. I don't have figures, but my guess is that the fraction of the 60 million existing PCs that are 386/486 machines as opposed to 8088/286/680x0 etc is small. Among students it is even smaller. Making software free, but only for folks with enough money to buy first class hardware is an interesting concept. Of course 5 years from now that will be different, but 5 years from now everyone will be running free GNU on their 200 MIPS, 64M SPARCstation-5.
Re 2: your job is being a professor and researcher: That's one hell of a good excuse for some of the brain-damages of minix. I can only hope (and assume) that Amoeba doesn't suck like minix does.
Amoeba was not designed to run on an 8088 with no hard disk.
If this was the only criterion for the "goodness" of a kernel, you'd be right. What you don't mention is that minix doesn't do the micro-kernel thing very well, and has problems with real multitasking (in the kernel). If I had made an OS that had problems with a multithreading filesystem, I wouldn't be so fast to condemn others: in fact, I'd do my damndest to make others forget about the fiasco.
A multithreaded file system is only a performance hack. When there is only one job active, the normal case on a small PC, it buys you nothing and adds complexity to the code. On machines fast enough to support multiple users, you probably have enough buffer cache to insure a hit cache hit rate, in which case multithreading also buys you nothing. It is only a win when there are multiple processes actually doing real disk I/O. Whether it is worth making the system more complicated for this case is at least debatable.
I still maintain the point that designing a monolithic kernel in 1991 is a fundamental error. Be thankful you are not my student. You would not get a high grade for such a design :-)
The fact is that linux is more portable than minix. What? I hear you say. It's true - but not in the sense that ast means: I made linux as conformant to standards as I knew how (without having any POSIX standard in front of me). Porting things to linux is generally /much/ easier than porting them to minix.
MINIX was designed before POSIX, and is now being (slowly) POSIXized as everyone who follows this newsgroup knows. Everyone agrees that user-level standards are a good idea. As an aside, I congratulate you for being able to write a POSIX-conformant system without having the POSIX standard in front of you. I find it difficult enough after studying the standard at great length.
My point is that writing a new operating system that is closely tied to any particular piece of hardware, especially a weird one like the Intel line, is basically wrong. An OS itself should be easily portable to new hardware platforms. When OS/360 was written in assembler for the IBM 360 25 years ago, they probably could be excused. When MS-DOS was written specifically for the 8088 ten years ago, this was less than brilliant, as IBM and Microsoft now only too painfully realize. Writing a new OS only for the 386 in 1991 gets you your second 'F' for this term. But if you do real well on the final exam, you can still pass the course.
Prof. Andrew S. Tanenbaum (ast@cs.vu.nl)
Linus Benedict Torvalds
From: torvalds@klaava.Helsinki.FI (Linus Benedict Torvalds)
Newsgroups: comp.os.minix
Subject: Re: LINUX is obsolete
Date: 31 Jan 92 10:33:23 GMT
Organization: University of Helsinki
In article <12615@star.cs.vu.nl> ast@cs.vu.nl (Andy Tanenbaum) writes:
The limitations of MINIX relate at least partly to my being a professor: An explicit design goal was to make it run on cheap hardware so students could afford it.
All right: a real technical point, and one that made some of my comments inexcusable. But at the same time you shoot yourself in the foot a bit: now you admit that some of the errors of minix were that it was too portable: including machines that weren't really designed to run unix. That assumption lead to the fact that minix now cannot easily be extended to have things like paging, even for machines that would support it. Yes, minix is portable, but you can rewrite that as "doesn't use any features", and still be right.
A multithreaded file system is only a performance hack.
Not true. It's a performance hack /on a microkernel/, but it's an automatic feature when you write a monolithic kernel - one area where microkernels don't work too well (as I pointed out in my personal mail to ast). When writing a unix the "obsolete" way, you automatically get a multithreaded kernel: every process does it's own job, and you don't have to make ugly things like message queues to make it work efficiently.
Besides, there are people who would consider "only a performance hack" vital: unless you have a cray-3, I'd guess everybody gets tired of waiting on the computer all the time. I know I did with minix (and yes, I do with linux too, but it's /much/ better).
I still maintain the point that designing a monolithic kernel in 1991 is a fundamental error. Be thankful you are not my student. You would not get a high grade for such a design :-)
Well, I probably won't get too good grades even without you: I had an argument (completely unrelated - not even pertaining to OS's) with the person here at the university that teaches OS design. I wonder when I'll learn :)
My point is that writing a new operating system that is closely tied to any particular piece of hardware, especially a weird one like the Intel line, is basically wrong.
But /my/ point is that the operating system /isn't/ tied to any processor line: UNIX runs on most real processors in existence. Yes, the /implementation/ is hardware-specific, but there's a HUGE difference. You mention OS/360 and MS-DOG as examples of bad designs as they were hardware-dependent, and I agree. But there's a big difference between these and linux: linux API is portable (not due to my clever design, but due to the fact that I decided to go for a fairly- well-thought-out and tested OS: unix.)
If you write programs for linux today, you shouldn't have too many surprises when you just recompile them for Hurd in the 21st century. As has been noted (not only by me), the linux kernel is a miniscule part of a complete system: Full sources for linux currently runs to about 200kB compressed - full sources to a somewhat complete developement system is at least 10MB compressed (and easily much, much more). And all of that source is portable, except for this tiny kernel that you can (provably: I did it) re-write totally from scratch in less than a year without having /any/ prior knowledge.
In fact the /whole/ linux kernel is much smaller than the 386-dependent things in mach: i386.tar.Z for the current version of mach is well over 800kB compressed (823391 bytes according to nic.funet.fi). Admittedly, mach is "somewhat" bigger and has more features, but that should still tell you something.
Linus
Linus Benedict Torvalds
From: torvalds@klaava.Helsinki.FI (Linus Benedict Torvalds)
Newsgroups: comp.os.minix
Subject: Apologies (was Re: LINUX is obsolete)
Date: 30 Jan 92 15:38:16 GMT
Organization: University of Helsinki
In article <1992Jan29.231426.20469@klaava.Helsinki.FI> I wrote:
Well, with a subject like this, I'm afraid I'll have to reply.
And reply I did, with complete abandon, and no thought for good taste and netiquette. Apologies to ast, and thanks to John Nall for a friendy "that's not how it's done"-letter. I over-reacted, and am now composing a (much less acerbic) personal letter to ast. Hope nobody was turned away from linux due to it being (a) possibly obsolete (I still think that's not the case, although some of the criticisms are valid) and (b) written by a hothead :-)
Linus "my first, and hopefully last flamefest" Torvalds
Unhappy campers
Andy Tanenbaum
From: ast@cs.vu.nl (Andy Tanenbaum)
Newsgroups: comp.os.minix
Subject: Unhappy campers
Date: 3 Feb 92 22:46:40 GMT
Organization: Fac. Wiskunde & Informatica, Vrije Universiteit, Amsterdam
I've been getting a bit of mail lately from unhappy campers. (Actually 10 messages from the 43,000 readers may seem like a lot, but it is not really.) There seem to be three sticking points:
- Monolithic kernels are just as good as microkernels
- Portability isn't so important
- Software ought to be free
If people want to have a serious discussion of microkernels vs. monolithic kernels, fine. We can do that in comp.os.research. But please don't sound off if you have no idea of what you are talking about. I have helped design and implement 3 operating systems, one monolithic and two micro, and have studied many others in detail. Many of the arguments offered are nonstarters (e.g., microkernels are no good because you can't do paging in user space-- except that Mach DOES do paging in user space).
If you don't know much about microkernels vs. monolithic kernels, there is some useful information in a paper I coauthored with Fred Douglis, Frans Kaashoek and John Ousterhout in the Dec. 1991 issue of COMPUTING SYSTEMS, the USENIX journal). If you don't have that journal, you can FTP the paper from ftp.cs.vu.nl (192.31.231.42) in directory amoeba/papers as comp_sys.tex.Z (compressed TeX source) or comp_sys.ps.Z (compressed PostScript). The paper gives actual performance measurements and supports Rick Rashid's conclusion that microkernel based systems are just as efficient as monolithic kernels.
As to portability, there is hardly any serious discussion possible any more. UNIX has been ported to everything from PCs to Crays. Writing a portable OS is not much harder than a nonportable one, and all systems should be written with portability in mind these days. Surely Linus' OS professor pointed this out. Making OS code portable is not something I invented in 1987.
While most people can talk rationally about kernel design and portability, the issue of free-ness is 100% emotional. You wouldn't believe how much [expletive deleted] I have gotten lately about MINIX not being free. MINIX costs $169, but the license allows making two backup copies, so the effective price can be under $60. Furthermore, professors may make UNLIMITED copies for their students. Coherent is $99. FSF charges >$100 for the tape its "free" software comes on if you don't have Internet access, and I have never heard anyone complain. 4.4 BSD is $800. I don't really believe money is the issue. Besides, probably most of the people reading this group already have it.
A point which I don't think everyone appreciates is that making something available by FTP is not necessarily the way to provide the widest distribution. The Internet is still a highly elite group. Most computer users are NOT on it. It is my understanding from PH that the country where MINIX is most widely used is Germany, not the U.S., mostly because one of the (commercial) German computer magazines has been actively pushing it. MINIX is also widely used in Eastern Europe, Japan, Israel, South America, etc. Most of these people would never have gotten it if there hadn't been a company selling it.
Getting back to what "free" means, what about free source code? Coherent is binary only, but MINIX has source code, just as LINUX does. You can change it any way you want, and post the changes here. People have been doing that for 5 years without problems. I have been giving free updates for years, too.
I think the real issue is something else. I've been repeatedly offered virtual memory, paging, symbolic links, window systems, and all manner of features. I have usually declined because I am still trying to keep the system simple enough for students to understand. You can put all this stuff in your version, but I won't put it in mine. I think it is this point which irks the people who say "MINIX is not free," not the $60.
An interesting question is whether Linus is willing to let LINUX become "free" of his control. May people modify it (ruin it?) and sell it? Remember the hundreds of messages with subject "Re: Your software sold for money" when it was discovered the MINIX Centre in England was selling diskettes with news postings, more or less at cost?
Suppose Fred van Kempen returns from the dead and wants to take over, creating Fred's LINUX and Linus' LINUX, both useful but different. Is that ok? The test comes when a sizable group of people want to evolve LINUX in a way Linus does not want. Until that actually happens the point is moot, however.
If you like Linus' philosophy rather than mine, by all means, follow him, but please don't claim that you're doing this because LINUX is "free." Just say that you want a system with lots of bells and whistles. Fine. Your choice. I have no argument with that. Just tell the truth.
As an aside, for those folks who don't read news headers, Linus is in Finland and I am in The Netherlands. Are we reaching a situation where another critical industry, free software, that had been totally dominated by the U.S. is being taken over by the foreign competition? Will we soon see President Bush coming to Europe with Richard Stallman and Rick Rashid in tow, demanding that Europe import more American free software?
Andy Tanenbaum (ast@cs.vu.nl)
Fred Fish
From: fnf@fishpond.uucp (Fred Fish)
Newsgroups: comp.os.minix
Subject: Re: Unhappy campers
Date: 4 Feb 92 20:57:40 GMT
Organization: Amiga Library Distribution Services
In article <12667@star.cs.vu.nl> ast@cs.vu.nl (Andy Tanenbaum) writes:
While most people can talk rationally about kernel design and portability, the issue of free-ness is 100% emotional. You wouldn't believe how much [expletive deleted] I have gotten lately about MINIX not being free. MINIX costs $169, but the license allows making two backup copies, so the effective price can be under $60. Furthermore, professors may make UNLIMITED copies for their students. Coherent is $99. FSF charges >$100 for the tape its "free" software comes on if you don't have Internet access, and I have never heard anyone complain. 4.4 BSD is $800. I don't really believe money is the issue. Besides, probably most of the people reading this group already have it.
The distribution cost is not the problem. As you've noted, nobody complains about the FSF's distribution fee being too high. The problem, as I see it, is that there is only one legal source for for the software for people that simply want a working release. And from watching the minix group since minix first became available, my impression is that nobody enjoys dealing with PH for a whole host of reasons.
I think the real issue is something else. I've been repeatedly offered virtual memory, paging, symbolic links, window systems, and all manner of features. I have usually declined because I am still trying to keep the system simple enough for students to understand. You can put all this stuff in your version, but I won't put it in mine. I think it is this point which irks the people who say "MINIX is not free," not the $60.
If PH was not granted a monopoly on distribution, it would have been possible for all of the interested minix hackers to organize and set up a group that was dedicated to producing enhanced-minix. This aim of this group could have been to produce a single, supported version of minix with all of the commonly requested enhancements. This would have allowed minix to evolve in much the same way that gcc has evolved over the last few years. Sure there are variant versions of gcc, but most of the really good enhancements, bug fixes, etc are eventually folded back into a master source base that future distributions derive from. Thus you would have been left in peace to continue your tight control over the educational version of minix, and everyone else that wanted more than an educational tool could put their energies into enhanced-minx.
The primary reason I've never gotten into using minix, after the initial excitement of hearing about it's availability way back when, is that I have no interest in trying to apply random patches from all over the place, sort out the problems, and eventually end up with a system that does what I want it to, but which I can't pass on to anyone else.
The test comes when a sizable group of people want to evolve LINUX in a way Linus does not want. Until that actually happens the point is moot, however.
Where is the sizeable group of people that want to evolve gcc in a way that rms/FSF does not approve of?
Where is the sizeable group of people that want to evolve emacs in a way that rms/FSF doesn't approve of?
I'd say that if the primary maintainers of a large piece of useful, freely redistributable, software are at all responsive to incorporating useful enhancements and acting as the central repository and clearing house for the software, then these splinter groups simply do not have sufficient motivation to form. Having a single source for the software, and having the primary maintainer(s) be unresponsive to the desires of a large group of users, is the catalyst that causes these sorts of pressures; not the freedom of the software.
-Fred
--
|\/ o\   Fred Fish, 1835 E. Belmont Drive, Tempe, AZ 85284, USA
|/\__/   1-602-491-0048   {asuvax,mcdphx,cygint,amix}!fishpond!fnf
Andy Tanenbaum
From: ast@cs.vu.nl (Andy Tanenbaum)
Newsgroups: comp.os.minix
Subject: Re: Unhappy campers
Date: 5 Feb 92 23:23:26 GMT
Organization: Fac. Wiskunde & Informatica, Vrije Universiteit, Amsterdam
In article <205@fishpond.uucp> fnf@fishpond.uucp (Fred Fish) writes:
If PH was not granted a monopoly on distribution, it would have been possible for all of the interested minix hackers to organize and set up a group that was dedicated to producing enhanced-minix. This aim of this group could have been to produce a single, supported version of minix with all of the commonly requested enhancements. This would have allowed minix to evolve in much the same way that gcc has evolved over the last few years.
This IS possible. If a group of people wants to do this, that is fine. I think co-ordinating 1000 prima donnas living all over the world will be as easy as herding cats, but there is no legal problem. When a new release is ready, just make a diff listing against 1.5 and post it or make it FTPable. While this will require some work on the part of the users to install it, it isn't that much work. Besides, I have shell scripts to make the diffs and install them. This is what Fred van Kempen was doing. What he did wrong was insist on the right to publish the new version, rather than diffs against the PH baseline. That cuts PH out of the loop, which, not surprisingly, they weren't wild about. If people still want to do this, go ahead.
Of course, I am not necessarily going to put any of these changes in my version, so there is some work keeping the official and enhanced ones in sync, but I am willing to co-operate to minimize work. I did this for a long time with Bruce Evans and Frans Meulenbroeks.
If Linus wants to keep control of the official version, and a group of eager beavers want to go off in a different direction, the same problem arises. I don't think the copyright issue is really the problem. The problem is co-ordinating things. Projects like GNU, MINIX, or LINUX only hold together if one person is in charge. During the 1970s, when structured programming was introduced, Harlan Mills pointed out that the programming team should be organized like a surgical team--one surgeon and his or her assistants, not like a hog butchering team--give everybody an axe and let them chop away.
Anyone who says you can have a lot of widely dispersed people hack away on a complicated piece of code and avoid total anarchy has never managed a software project.
Where is the sizeable group of people that want to evolve gcc in a way that rms/FSF does not approve of?
A compiler is not something people have much emotional attachment to. If the language to be compiled is a given (e.g., an ANSI standard), there isn't much room for people to invent new features. An operating system has unlimited opportunity for people to implement their own favorite features.
Andy Tanenbaum (ast@cs.vu.nl)
Linus Benedict Torvalds
From: torvalds@klaava.Helsinki.FI (Linus Benedict Torvalds)
Newsgroups: comp.os.minix
Subject: Re: Unhappy campers
Date: 6 Feb 92 10:33:31 GMT
Organization: University of Helsinki
In article <12746@star.cs.vu.nl> ast@cs.vu.nl (Andy Tanenbaum) writes:
If Linus wants to keep control of the official version, and a group of eager beavers want to go off in a different direction, the same problem arises.
This is the second time I've seen this "accusation" from ast, who feels pretty good about commenting on a kernel he probably haven't even seen. Or at least he hasn't asked me, or even read alt.os.linux about this. Just so that nobody takes his guess for the full thruth, here's my standing on "keeping control", in 2 words (three?):
I won't.
The only control I've effectively been keeping on linux is that I know it better than anybody else, and I've made my changes available to ftp-sites etc. Those have become effectively official releases, and I don't expect this to change for some time: not because I feel I have some moral right to it, but because I haven't heard too many complaints, and it will be a couple of months before I expect to find people who have the same "feel" for what happens in the kernel. (Well, maybe people are getting there: tytso certainly made some heavy changes even to 0.10, and others have hacked it as well)
In fact I have sent out feelers about some "linux-kernel" mailing list which would make the decisions about releases, as I expect I cannot fully support all the features that will /have/ to be added: SCSI etc, that I don't have the hardware for. The response has been non-existant: people don't seem to be that eager to change yet. (well, one person felt I should ask around for donations so that I could support it - and if anybody has interesting hardware lying around, I'd be happy to accept it :)
The only thing the copyright forbids (and I feel this is eminently reasonable) is that other people start making money off it, and don't make source available etc... This may not be a question of logic, but I'd feel very bad if someone could just sell my work for money, when I made it available expressly so that people could play around with a personal project. I think most people see my point.
That aside, if Fred van Kempen wanted to make a super-linux, he's quite wellcome. He won't be able to make much money on it (distribution fee only), and I don't think it's that good an idea to split linux up, but I wouldn't want to stop him even if the copyright let me.
I don't think the copyright issue is really the problem. The problem is co-ordinating things. Projects like GNU, MINIX, or LINUX only hold together if one person is in charge.
Yes, coordination is a big problem, and I don't think linux will move away from me as "head surgeon" for some time, partly because most people understand about these problems. But copyright /is/ an issue: if people feel I do a bad job, they can do it themselves. Likewise with gcc. The minix copyright, however, means that if someone feels he could make a better minix, he either has to make patches (which aren't that great whatever you say about them) or start off from scratch (and be attacked because you have other ideals).
Patches aren't much fun to distribute: I haven't made cdiffs for a single version of linux yet (I expect this to change: soon the patches will be so much smaller than the kernel that making both patches and a complete version available is a good idea - note that I'd still make the whole version available too). Patches upon patches are simply impractical, especially for people that may do changes themselves.
Where is the sizeable group of people that want to evolve gcc in a way that rms/FSF does not approve of?A compiler is not something people have much emotional attachment to. If the language to be compiled is a given (e.g., an ANSI standard), there isn't much room for people to invent new features. An operating system has unlimited opportunity for people to implement their own favorite features.
Well, there's GNU emacs... Don't tell us people haven't got emotional attachment to editors :)
Linus