67

Even though text-based terminals still see use in specialty niches, modern general-purpose computers generally run graphical software and present a graphical user interface (GUI). This holds for everything from low-end cell phones and some computer peripherals such as printers to fairly high-end servers.

I'd like for computers to be roughly on par technically with what we have today, but with user interfaces that are predominantly text-based. It's okay if these computers work with text blocks and the like (for example, block-mode text handling along the lines of the IBM 5250 series of terminals), but except for graphically oriented work such as image editing, there should be minimal graphics.

Given that in our world, personal computers started becoming graphical pretty much as soon as they were powerful enough to run a graphical user interface at acceptable speeds, and some even earlier, how can I reasonably explain that GUIs never became mainstream?

Note that these computers need not be expert-only systems; I just want their interfaces to be predominantly text-based rather than predominantly graphical as is the case today in our world.

Also, to clarify, since there seems to be widespread confusion about this: lack of a graphical user interface does not imply a lack of graphical capability. Take the original IBM PC model 5150 as an example; except for machines equipped only with an MDA display adapter, the software running on them often combined text-based data entry with graphical visualization modes (what we in modern terms might call a more or less accurate "print preview"). Think of the early versions of Microsoft Word for DOS, or of how early versions of Lotus 1-2-3 used different graphics cards and monitors to display data and graphs. Instead of thinking "no graphics at all", think "graphics only as add-ons to text, rather than as the primary user interaction element".

And since lots of answers imply that the only alternatives are pure command-line interfaces and GUIs, let me remind you of tools like Norton Commander. I used Norton Commander back in the late 1980s and early 1990s, still use look-alikes such as Midnight Commander to this day, and can guarantee that they provide a perfectly useful environment for file management and for launching applications, without in any way depending on more than a text console. There is even a general term for these: text-based user interface, or TUI.
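For readers who have never seen one, here is a minimal sketch of such a text-console launcher; it assumes the ncurses-based dialog utility is installed (whiptail accepts much the same arguments), and the menu entries are only illustrative.

#!/bin/sh
# Draw a full-screen text menu and run the chosen program; no graphics involved.
choice=$(dialog --clear --title "Launcher" \
    --menu "Pick a task:" 15 50 4 \
    files "Browse files (Midnight Commander)" \
    mail  "Read mail (mutt)" \
    edit  "Edit a document (vim)" \
    shell "Drop to a shell" \
    3>&1 1>&2 2>&3)   # dialog prints the chosen tag on stderr; swap it onto stdout
clear
case "$choice" in
    files) mc ;;
    mail)  mutt ;;
    edit)  vim ;;
    shell) "$SHELL" ;;
esac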

user
  • It is also called "Pseudo-graphical user interface". – Vi. Dec 08 '16 at 13:53
  • Actually, many (most?) fairly high-end servers do not run GUIs. GUIs are usually assumed for client systems that connect to servers. Almost every "high-end server" I've worked on in the past 40+ years was minus a GUI. (Note, though, that functions such as X server/X windows or Java RAWT, etc., are often available from servers, even if the servers themselves might not have native graphics capability.) – user2338816 Dec 09 '16 at 13:19
  • Since you stated "these computers need not be expert-only systems", I don't think it is a viable scenario. I mean think of the main uses of computers today. Their main purpose is the consumption and editing of (multi)media. No GUI means no desktop publishing, no movie editing or watching, no web (as we know it), so most likely most of the population would not be interested in computers. Only experts, people processing huge amounts of data, and some DIY geeks. – mg30rg Dec 09 '16 at 15:29
  • @mg30rg TV sets for many decades did not have a GUI (and one can easily argue whether GUIs of modern TVs are "discoverable" or "easy to use"). The web is predominantly text-based, even though there are graphical elements. I think you are making the same flawed assumption that many have already made: Just because it's graphical doesn't mean it must be controlled or exposed through a GUI. Also note the passage in the question that "except for *graphically oriented work* such as image editing, there should be minimal graphics" (emphasis added). – user Dec 09 '16 at 15:31
  • @MichaelKjörling "TV sets for many decades did not have a GUI." is not true. When you switch to a tv channel, and see the logo of the channel in the corner of the screen, that is already a GUI. (Not a very advanced one though.) A GUI doesn't automatically mean a mouse. The buttons on the old TV sets were the input device, and the picture on the screen was the output. – mg30rg Dec 09 '16 at 15:34
  • @MichaelKjörling Graphic output without a GUI is for example the POV light tracer program. You could edit all your models in text (even by copy CON), then run your ray tracer and get an avi as result. If your actions have a direct graphic result (like changing a tv channel), it is indeed a GUI. – mg30rg Dec 09 '16 at 15:37
  • @mg30rg You are a few decades too modern. Original TV sets at best had a channel selector knob. As in a literal knob that you turned to select the frequency the TV receiver operated at. In the early 1990s, we had a TV at home that had buttons for selecting which of a dozen or so preset frequencies to use, where the frequency was set using a small physical knob. And I would still argue that watching a movie counts as "graphically oriented" and thus fits within the exception already allowed for in the question itself. – user Dec 09 '16 at 15:39
  • @MichaelKjörling "When you switch to a tv channel, and see the logo of the channel in the corner of the screen" - Exactly what part of this comment implies not using a knob? – mg30rg Dec 09 '16 at 15:46
  • @mg30rg It depends on the TV channel you are switching to broadcasting their logo at the time. There is any number of reasons why such a logo might not be present at the time. Why do you think later TV sets added a configurable on-screen display that displays an identification of the channel you just switched the TV set to? – user Dec 09 '16 at 15:48
  • 2
  • @MichaelKjörling You could borrow from Fallout 4, in that technology moved in a different direction than the miniaturization of computer systems, which would drastically slow down advancements in that field. The first commercial GUI was not released in our universe until 1985, so imagine tech development branching off right after the invention of the integrated circuit in 1958, and going in a different direction. By 1985 would tech be at the level it would need to be for Apple to release its GUI? – NZKshatriya Dec 11 '16 at 06:30
  • @NZKshatriya I'm not sure where you got 1985 from as when commercial GUIs were released in our universe; the Macintosh was introduced in January 1984, and Windows 1.0 (hardly a groundbreaking commercial success) was introduced in November 1985. In fact, I believe that the early Macintosh was more of a commercial success than the early Microsoft Windows. – user Dec 11 '16 at 11:31
  • Security paranoia. A text-based interface has fewer degrees of freedom, and can be accomplished with less code. You might even imagine that lowercase 'L', the vertical bar '|', the number '1', etc. are differentiated more strongly in a finite and easily distinguishable set of symbols. A society concerned with hacking and the potential for deception would consider a limited medium like a text terminal to be easier to audit and not be fooled by, and this prioritization might lead to GUIs being distrusted. – HostileFork says dont trust SE Dec 11 '16 at 16:38
  • @MichaelKjörling I was a year off going from memory at 1am or so >.< – NZKshatriya Dec 11 '16 at 19:42
  • 1
    Question seems too specific, so it seems like you're wanting to examine a potential reaction to eventual introduction of GUIs or some related event. One difficulty is that detailed graphics is an almost necessary adjunct to development of many technologies resulting in "modern computers". Engineering diagrams, CAD/CAM, etc., lead naturally to manipulation of graphic elements; and inclusion of those methods in UIs fairly naturally follows. Engineering modern systems is hard. – user2338816 Dec 12 '16 at 02:37
  • Is it possible that more potential users in this world have disabilities that make using a GUI difficult? My job is all about documenting GUIs in a way that people using screen readers with keyboards can use them. Where the focus is MATTERS! (The screen reader we focus on where I work is JAWS). Possibly look at WCAG to get a sense of how much easier text-only is for accessibility. – April Salutes Monica C. Mar 13 '19 at 13:56

36 Answers

76

As almost anyone who has ever used a shell will tell you, a text-based UI is much more comfortable, fast, easy to develop for and just BETTER. The big problem, though, is that it's a language you have to know before you can do anything with your computer; not needing that prior knowledge is the main advantage of a GUI.

So I think what you should consider is a way to explain why computers can always presume that the users "speak their language". I see a few options:

  • Computers started out as a very elitist technology, and knowing the language is a kind of status symbol. This would give people the motivation to learn, and developers the motivation not to appeal to less-sophisticated audiences, because that would ruin their brand. Soon, the language is just common knowledge.
  • The language in the world is in the first place very accurate and structured. There is always exactly one way to say everything. (I think this could be very interesting to develop, but also quite hard)
  • The language of the computers either developed very fast or co-evolved with the human understanding of it, i.e. the computers would "learn" a new word, the new word would be publicized, and soon everyone would know it.
Dotan
66

One simple change:

Never invent a Computer Mouse

No matter how comfortable a graphical user interface (GUI) is, it wouldn't be nearly as comfortable and useful without the invention of the computer mouse (and, later, touch interfaces).

Text interfaces stem from a time when the keyboard was the only input device and are still designed primarily for keyboard use; you cannot comfortably or reliably use any GUI without a mouse or some other 'pointer' available to select things and interact with them.

The invention of the computer mouse, and thus the pointer, brought with it the era of pointy-clicky, a derogatory term referring to buttons and interactable areas that are fully virtual, as opposed to the hardware reality of a keyboard. Instead of having to work with a limited set of input functionality, the only limit becomes the number of pixels a display can show (and if you abuse scrolling, not even the screen size will be a hindrance for your mad interface experiments).

The combination of mouse/touch and GUI lets you cut away a layer of indirection that will always be present when you have to type something and confirm your command before anything can happen. Even if you react to every keystroke directly, there is only a finite set of interactions per program state, whereas the set of interactions possible with mouse/touch is potentially unlimited.


Elaboration on the evolution of your interfaces:

Now, even if you only have an indirect way of interaction, GUIs will eventually emerge, although your GUIs will be massively different from the GUIs we are used to (and have come to hate, er, love).
The eventual GUIs will be more of a graphically enhanced text interface (GETI): graphics will be used to display things such as video and images, or to add some nice backgrounds or gradients, while the classic prompt is unlikely to disappear.

Eventually, it is also likely that voice input becomes more common. Voice input will simply be an addition to and pseudo-replacement for the keyboard, but it cannot fully replace it unless voice processors become far better than they are in our timeline, or your software becomes more lenient and is outfitted with pseudo-intelligence that can guess what you're intending to do and ask you for additional input when needed.

dot_Sp0T
  • 28
    Clever but I'm not too sure about this. Touchpads can be made; touch screens can be developed instead; or, at the most basic levels, there can be something like arrow keys to move the cursor and a spacebar to select. I wouldn't say no mice means no interfaces; we would just replace the mice. – Zxyrra Dec 05 '16 at 23:41
  • 14
    An interface is not just about input, it's also about output. Even without specialized input devices, people will still develop graphs for displaying information. –  Dec 06 '16 at 00:18
  • 4
    @Zxyrra but what's the impetus to "invent" touch screen? and currently we have a path to touch - Console => Gui => Mouse => Touch... with about 100 revisions of GUI. Hell... look at all the issues there are going from GUI to Touch... I couldn't imagine going from keyboard to touch without the same or worse issues. – WernerCD Dec 06 '16 at 00:44
  • 1
    @WernerCD pseudo graphical output applications exist for CLIs. For example, htop for Linux, or vtop. Even without using a GUI to input data or commands, most modern OSes provide a lot of graphical output options via their CLIs, and as pointed out by other comments, it's hard to come up with a plausible way this doesn't develop into some form of GUI for mass market use. Even with just those two examples, the utility of a touch screen is pretty obvious, and it's arguable that they represent a GUI, even though they're CLI applications. – HopelessN00b Dec 06 '16 at 01:00
  • 1
    Trackball was invented in 1941; light pen in 1955. But the general point of the answer is valid. Without things like these, probably would never have a GUI. – WGroleau Dec 06 '16 at 03:30
  • 12
    My Nintendo 3DS has GUI but no mouse. PS and Xbox all have GUI but no mouse. Mouse is handy for GUIs, especially at PCs. There are however many cases where GUI can work without mouse just fine. The GUIs could look different, but they would still be there. Simply not inventing mouse/touchpad/input device X won't do. – MatthewRock Dec 06 '16 at 10:37
  • 1
    No 80-es kids here? You forgot the joystick. And probably some other input devices. Not to mention uninvented or not widely known ones. – David Balažic Dec 06 '16 at 10:41
  • @MatthewRock your xbox (original, 360 and one) are all using a joystick and a simplified keyboard; your ps (one, two and three) are using the same mechanisms; your ps4 uses the same and in addition a sort-of trackpad; your gameboy (classic, colour, pocket, advance, advance SP) uses joysticks and a reduced keyboard; your Nintendo DS, DS Light, 3DS, 3DSi, etc. use the same and in addition two touch-screens... That's not an exhaustive list. I guess only listing the Computer Mouse and thinking of joysticks etc, as implicit mice is not enough to satisfy you, so I will maybe add another section... – dot_Sp0T Dec 06 '16 at 10:43
  • 1
    @dot_Sp0T but then we go from the science-fiction straight to the fantasy. Computers are made to be useful; chances that someone wouldn't think of making his life easier - especially computer scientist, these clever guys - are 0. – MatthewRock Dec 06 '16 at 10:45
  • @MatthewRock I am not sure I can follow you, but what I understand is: You are saying that there is no way to make sure there is no GUI - I agree, a GUI is a logical development step, the only thing we can really do is take steps into changing the GUIs from what we have today to something more texty-feely like the OP is hoping to achieve. Computer Scientists work primarily with text-interfaces and editors (at least according to what I do on a daily basis) even though they are opened on a 'desktop' I type words and commands and make them do things - there's a small step to having every1 do that – dot_Sp0T Dec 06 '16 at 10:49
  • 7
    Our GUI interface archetype is WIMP, which stands for "windows, icon, menus, pointer". It does not require a mouse to move the pointer. There are many WIMP interfaces using keypads or joysticks to move the pointer; and there are WIMP interfaces using lightpens or touchscreens as a direct pointer. The mouse can even be seen as a distraction on the way to "true" pointing using physical touch. It definitely isn't a prerequisite for a GUI. – Graham Dec 06 '16 at 12:32
  • 1
    @WernerCD "Man, I hate pressing the right arrow 88 times to get to the right position on this line. I wish I could just tap the screen to jump the cursor to exactly where I want to go or tap a text element to toggle a function on or off as if it were a switch or button." There you go. Inspiration to invent touchscreens. – Tophandour Dec 06 '16 at 16:04
  • 2
    @Tophandour actually that's inspiration-to-organize-text-in-this-editor-in-blocks-so-I-can-use-ctrl-and-right-arrow-to-jump-between-whole-elements, they look similar but are not the same; also by inventing the touch-screen you do not necessarily invent the pointy-clicky-GUI, you simply invent a new input method that will enhance current input methods – dot_Sp0T Dec 06 '16 at 16:06
  • @dot_Sp0T there will basically always be a case where tapping a position on a screen (1 action) requires less time or actions than using keyboard shortcuts. Let's say that ctrl-right moves you n positions to the right. What if you want to move n/2, n+1, n-1, etc? Point is, some command-line interfaces that I've used support using the mouse to move the cursor to a specific spot so in the absence of mice, I don't see why it would be an impossible leap to want to do the same with a touchscreen. – Tophandour Dec 06 '16 at 16:12
  • 2
    @HopelessN00b As an extension to this, early programs such as the Turbo C editor actually produced a text based GUI that did not require a mouse to use effectively. – Michael Dec 06 '16 at 19:27
  • 2
    Norton Commander: great GUI, required pretty much nothing but arrow keys, Enter and ESC. Mouse was not even supported until later versions. – Agent_L Dec 07 '16 at 13:24
  • You also need to avoid touch-balls, and the Tektronix 4014 display had scroll-wheels to move the cursor left and right, and up and down. – Martin Bonner supports Monica Dec 08 '16 at 12:38
  • 1
    "Nobody had idea X" is a big problem; such ideas usually co-evolve. – Raphael Dec 09 '16 at 07:50
  • @Raphael it's less about not having an idea and more about not having it at that point in time / not making it into a device that gets used; I've got lots of ideas about cool and uncool things everyday but I do not make any of these because it seems futile or I do not know how to – dot_Sp0T Dec 09 '16 at 10:01
  • 1
    @Michael GUI applications that aren't well designed for keyboard use are infuriating. Mouse is the quick and simple discovery option - as soon as I'm familiar with the interface, I switch to controlling most things through the keyboard. The major exception is in most of the web, but Windows, Office, Visual Studio, Total Commander... the primary interaction method is keyboard, with mouse where I'm working on something where getting keyboard "muscle memory" isn't worth it, or where precise selection is simpler through the mouse. Mouse (and friends) is great, but keyboards are still essential. – Luaan Dec 09 '16 at 13:07
  • 1
    Touchscreen is not the same as GUI. When I was growing up my local library had touchscreen dumb terminals for accessing the catalogue which were 100% text-based. There were no keyboards. You selected Author, Title or Subject, then you would have something like 6 choices on each subsequent screen to gradually narrow down your selection. 6 levels would give you a choice of over 40 000 entries. – CJ Dennis Dec 10 '16 at 13:56
54

Slightly alternative answer.

You could have had a major breakthrough in voice recognition in the early days of the computer. The effect of this could be that interfacing would evolve around using voice and ear, as opposed to eyes and hands.

The added benefit of this is that you can continue using your hands and eyes to perform certain tasks (e.g. you're fixing a car and asking the computer for help in the mean time).

(This in turn means that no effort is put into developing GUIs for computers, but debugging/configuring might be done using a CLI)

Deruijter
  • Saw this comment on UX and I think your answer dovetails nicely. https://ux.stackexchange.com/questions/101990/why-are-terminal-consoles-still-used/102018#102018 – bob0the0mighty Dec 06 '16 at 16:04
  • 11
    I'm not so sure about that. Voice recognition has most of the disadvantages of CLI, with few of the advantages. The only real advantage you get is when you can't use your hands (or, to some extent, your eyes), or when you can't type very well. You'd pretty much need a fully capable expert system to make voice recognition work better than a GUI or even CLI. – Luaan Dec 06 '16 at 16:36
  • This would work well especially if computers could read text at a high level. Think about how futuristic computers were portrayed in "Alien", for example. The captain just wrote out complex questions to it. Star Trek, especially Next Generation, could do a lot just by talking to the computer AI. – Jason K Dec 07 '16 at 04:09
  • @Luaan I agree that there are disadvantages, you can't display a nice graph with audio for example. However if voice/ear starts out as the mainstream way of communicating with computers, it could hinder the development of advanced computer screens and the construction of software visualizations, since there is no market for it (yet). – Deruijter Dec 07 '16 at 08:56
  • I wasn't even comparing it to GUIs - just trivial CLIs. Even there voice recognition is a loser (again, unless sight/touch are impractical for some reason). GUIs (or "TUIs", if you want to keep them separate) blow it to bits. The most "realistic" approach would be what Jason suggested - if the computer could actually understand arbitrary human speech, it would mostly combine the good parts of CLI, GUI and voice, rather than mostly combining the bad parts of each :) Voice recognition isn't enough - you need expert systems, and flexible ones. – Luaan Dec 07 '16 at 11:50
  • Well it'd basically be a glorified command prompt, but it could indeed get people to rather talk to their computer than want to point at things. Thing is, complex voice recognition requires complex algorithms which require raw computation power. So unless you magick this breakthrough it wouldn't really be doable. – dot_Sp0T Dec 07 '16 at 14:55
  • Voice Recognition and some kind of vastly better AI-enabled user experience (handwavium engage) : Me: "Refactor Code for Better Cohesion and Less Coupling, and Write More Tests". Computer: "Complete. Would you like an update list of code-coverage statistics and meaningful quality metrics?" – Warren P Dec 08 '16 at 14:36
  • @WarrenP And now compare it to clicking the "Refactor Code for Better Cohesion and Less Coupling, and Write More Tests" button or pressing "Ctrl+R+R". You need pretty flexible commands to make voice recognition (and CLI - they really are almost identical) worth it. If you already have the AI, a GUI might still be a better option than voice control, depending on the task you actually want to do. – Luaan Dec 09 '16 at 13:18
  • Have you ever compared your times, reading a book versus listening to the unabridged audio book? Have you compared the time it takes you to type "ls" enter, skim the result, type "cd xyz" tab enter, skim the next result and then type "less intere" tab enter to pointing and clicking through a directory structure and to the time it takes you to describe what you are doing (in a way stupid enough for the computer. In case you don't know: very stupid). – Nobody Dec 09 '16 at 15:48
  • I'd like to see someone play quake, counter strike with voice command. And I'd like to hear the computer verbally describe even something as simple as a bar chart, let alone a war craft scene. I think GUIs are an inevitability. – Bohemian Dec 10 '16 at 16:41
  • @WarrenP If you could just tell the computer "Refactor Code for Better Cohesion and Less Coupling, and Write More Tests," what would be the point of you in that scenario? – Craig Tullis Dec 11 '16 at 06:17
  • @Nobody Reading a book? Much faster. Going through ls, cd? Much slower than a well done GUI. Try to time it one day, you might be surprised. Text feels faster, but rarely is. Don't forget that in a GUI, you have a choice between a keyboard and a mouse - and in a well-designed GUI, the transition is pretty seemless. I rarely point-and-click in TC/Explorer - both have their pros. But yes, doing the same thing in voice control is even slower, unless you get an AI and full-text search of everything - and by that point, the distinction barely makes any sense anyway. – Luaan Dec 12 '16 at 11:28
  • @Luaan Well, I admit I don't know much about better than standard / niche file GUIs. But comparing Nautilus (Gnome 3) and Explorer (Windows), I'm sure I'm way faster with cd/ls and lots of autocomplete. I think a well designed GUI (that is, which takes full advantage of both keyboard/mouse and the graphical display) could be faster, but I have yet to encounter such a GUI. – Nobody Dec 12 '16 at 11:49
30

The assertion that "modern general-purpose computers generally run graphical software and have a graphical user interface (GUI)" is simply false. The vast majority of servers have no GUI; see "headless server". They live in rows upon rows of racks and can be accessed only over the network. The computers behind search engines, on-line storage services, web-based mail services, enterprise resource planning software, questions-and-answers boards such as this one, content management systems, the computers providing file, print and streaming services, and in general the computers which serve the interconnected documents forming the world-wide web do not have graphical user interfaces (with, of course, the rare exceptions expected from everything in IT). A better formulation would be "workstations (and gamestations) generally have GUIs"; workstations have generally had GUIs for a very long time. The windowing system in current Linux distributions is based on the X11R6 protocol, first released in 1994.
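By way of illustration, interacting with such a headless machine is nothing but text over the network; a sketch (the host name and log path are made up):

ssh admin@web01.example.com 'uptime; df -h /'                     # ad-hoc health check, plain text in, plain text out
ssh admin@web01.example.com 'tail -n 20 /var/log/nginx/error.log' # peek at a service log the same way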

The first major class of mass-marketed applications which used full-screen graphics were games. Games ran in full screen graphical mode on the ZX Spectrum. The first GUI-based "killer applications" were desktop publishing and pre-press work.

The major problem I see with character-cell interfaces everywhere is multi-language support. A computer which can show very many thousands of different characters on a character-cell display can also show graphics on the same display -- a computer which can show 中华人民共和国 can certainly display graphics. And since it can display graphics, it will display graphics: some young student at a university somewhere will write a graphical interface and game over. Unless...

The only way to preserve character-cell interfaces for the masses is to make them compulsory. Suppose that the domination of the computer industry by a big blue three-letter corporation had not been met with anti-trust challenges from the government of the greatest power in the world. Suppose, on the contrary, that that domination had been enforced by the powers that be: no such thing as open-source operating systems like UNIX, no such thing as simple-minded operating systems like MS-DOS and the classic Mac OS; all computers run safe, secure and reliable operating systems like OS/360. Wouldn't we all be happy with the character-cell variant of the Common User Access?

AlexP
  • 9
    Lots of servers run server variants of Windows, and even Windows Server Core has a GUI (it's a very stripped down GUI, and is mostly used for displaying command line windows, but despite what Microsoft calls it, it's still a GUI, not text-based, at heart). Add to this just about every personal computer there is (which will generally run Windows, OS X, or Linux + X one way or another) and consider the computers in routers, microwave ovens, washing machines, cars and whatnot to be not general purpose, and I suspect my statement holds. – user Dec 05 '16 at 21:41
  • 3
    @MichaelKjörling: A large part of those Windows servers do not even have screens and keyboards attached... That's why Powershell is so much in fashion in the Windows Server world. But yes, RDP is a thing and quite a few Windows-based servers are accessed graphically over RDP. Still, many general-purpose computer do not have any kind of graphics software installed. – AlexP Dec 05 '16 at 22:30
  • 8
    Arguably, many of those servers are accessed through a GUI: a web browser, whether it's an end user visiting an online store or blog, or an admin accessing an admin control panel. And there's other similar GUIs, all your GUI email apps, your GUI chat apps, your GUI video streaming apps... GUI isn't limited to just locally hosted X11 or Aqua or Windows Shell; the apps within them can present GUIs for remote servers. There will of course be cases where a server really is exclusively accessed by users through text-only means, but headless server does not automatically mean GUI-less. – 8bittree Dec 06 '16 at 16:29
  • 1
    @MichaelKjörling Nay, that statement doesn't hold. There's pretty well-verified statistics out there on the numbers of computers, personal vs. data-center/server, and how many of those are running Linux vs. Windows vs. Mac OS X vs Solaris vs AIX, etc. And my router is reasonably general-purpose. Sure, it mostly does routing, but it's a Linux device doing various non-router things for me. This is of course moot relative to your question: headless servers may be the majority, but they're a technical niche in many ways, just more numerous. – mtraceur Dec 08 '16 at 07:01
  • 1
    Servers aren't general-purpose computers – noɥʇʎԀʎzɐɹƆ Dec 11 '16 at 00:00
  • @AlexP That's actually an interesting point. Powershell gained traction because it gives you better repeatability, auditing, automation... but it's on the decline again, because the GUI tools also evolved in the meantime. Nowadays, I can do the exact same deployment without (personally) writing a line of PowerShell, the same way on a hundred servers in a cluster. And of course, now with the Windows Server Nano (which has no GUI), Powershell (and friends) is coming back again. Interesting times :) – Luaan Dec 12 '16 at 11:30
29

I think that GUIs are so popular because visual learners make up the majority of the population. With 2 of every 3 people being visual learners, they are the largest market, just as most things are made for right-handed people. If you make auditory learners the majority of the population, followed by kinesthetic learners, with visual learners a distant third, the market will adapt and GUIs will be an expensive niche market.

(Image: breakdown of visual, auditory and kinesthetic learners in the population.)

I'm a programmer and I don't like text UIs. I know very well how powerful they are; I learned to be quite good with bash, and use it every day at work to administer our UNIX servers, but if I had the choice I would always choose GUIs. That's how my brain is wired. I learned to use Emacs, but I always go for Atom and Visual Studio.

P.S. Image taken from Successfully Using Visual Aids in Your Presentation

slobodan.blazeski
  • 2
    I was going to answer this. Make your language easy to recognize and for computers to understand and voice terminals will be much more prolific from the start of computing. – Jorge Aldo Dec 06 '16 at 06:36
  • CLIs fall into the Visual sector of your diagram, the same as GUIs--they're all about the recognition and manipulation of symbols that you see on a screen. If auditory and kinesthetic learners were the vast majority of the population, I could imagine a lot more motivation for the development of voice interfaces and haptic interfaces, but I think a preference for CLIs requires a different explanation. – David K Nov 14 '17 at 23:21
25

Your world does not have pixel-capable screens. With the components readily available, one could be built only crudely, at impractically large sizes (billboard size or greater), and with large gaps in between the dots. But no hardware or software (ray-tracing, etc) was ever developed that would make good use of this, and no one except maybe sci-fi authors really sees much value in such a thing.

If all you have to make desktop monitors out of is arrays of seven segment displays, then you have a text-based user experience built into the hardware. If the monitors are literally made out of 7-segment displays (or something like them), and particularly if you bring in a historical/legal basis for that, then you don't really need any tortured argument about why they don't just draw pictures on the things, because the capability isn't there.

You can also offer some other side benefits of this that are off-limits to us in the real world. Like having the monitor be just another cheap USB device, or Bluetooth device, with virtually zero power consumption. And you can bring back ASCII art in a big way.

This conception of technology requires a divergence of technological development from the real world somewhere around 1900. Radio is in, television is out. Comic books, dime novels and penny dreadfuls are in, cinema is out. Old-fashioned seismometers and other machines that directly draw on paper are in. The advent of computers still happens, because this was done for reasons of code-breaking and mathematical research (Babbage, Zuse, others). Blinkenlights are in.

Cheap and accessible photography is out; most people can only afford one or two family portraits in their lifetimes, and it's all film based. But for the price, the quality standards are very high, and portraits are typically stereographic (gives more flavor for divergent technological progress).

Printers are very fancy, very cheap (and the ink is even cheaper!!), and very fast, with advanced typography capabilities, and paper is incredibly cheap and easily recycled. Even sophisticated book binding is a standard feature on a very affordable printer.

If you need a "nuclear option", further reinforce suspension of disbelief with copyright law. In your world, equipment manufacturers would be held liable for any device capable of showing a photograph or facsimile of a copyrighted oil painting. (If you go in this direction, have "the Betamax case" occur 100 years earlier, applied to single-frame film photography, and decided more or less in the opposite from real history. The real case was a 5-4 split decision!) Strictly control photography licenses on this basis, further accounting for the high price and therefore rarity and superior, exalted quality of photographs.

For all these reasons, no one has much motivation to develop technology capable of showing pictures, and the work it would take to match the analog capabilities with any digital graphical system would be far too high for amateurs to mount a successful attempt. Even serious efforts with serious budget would be perceived as crude toy projects, or worse, as illicit subterfuge, without any legitimate practical use.

All these background factors will hopefully reinforce the divergence away from pixel graphics and create a huge barrier to introducing it into your world. ("Such a monitor would require way too much power!" "Stereoscopy would be next to impossible!" "You would have to upset 100+ years of copyright law and legal precedent!" "Even simple line art would look like garbage!")

"There's way too much information to decode the Matrix. You get used to it, though. Your brain does the translating. I don't even see the code. All I see is blonde, brunette, redhead. Hey uh, you want a drink?"

wberry
  • 3
    There is a variant of the 7-segment display called the 14 segment display that can display the full Latin alphabet. – Stig Hemmer Dec 06 '16 at 08:24
  • 1
    While most CRTs project a spot, it's possible to have the beam project other shapes, and some early displays for things like air traffic control displayed alphanumerics by selecting letter-shapes for the beams and flashing them at the required location. Such an approach would probably not require turning the beam on and off as quickly as would be necessary with a raster display. – supercat Dec 07 '16 at 17:44
  • 1
    -1 This proposal is simply not technically plausible - for example if "Cheap and accessible photography is out" then so is making integrated circuits via photolithography, which means that "computers" are stuck in the discrete component stone age. In similar ways, the whole dense-pixels-can't-be-done idea is entirely incompatible with anything approaching the computational density found in our world; if pixels are huge, then so are logic elements. – Chris Stratton Dec 09 '16 at 04:26
  • @ChrisStratton "with the components readily available". I only propose that certain things that could have been done, were not done. – wberry Dec 09 '16 at 23:17
  • The problem is that you are also proposing things that essentially require as supporting technologies the very things you propose didn't happen. You can have nobody choose to look at the equivalent of a display, but you'll have the technology to build them. – Chris Stratton Dec 10 '16 at 06:25
21

Make porn and video games not a thing.

Now who cares to make computers handle more graphics? Good luck on getting people to believe it.

Make mobile computers useful/desirable earlier.

If we had hand held computers that could do something useful or cool before anyone had gotten graphics running, or when graphics would have been battery prohibitive, text only could have become the standard way everyone uses computers.

Make programming much more popular

If most people write at least some of the programs they use and text is the (easiest) way to interact with them text will be popular. This could happen if copyright got out of control or people lost trust in distributed programs.

Make illiteracy or functional illiteracy a bigger issue.

You don't want to look like the only guy at the meeting who needs pictures, and you really don't want to imply your boss can't read.

15

I'm surprised so few people have touched on the possible cultural motivators that would limit/prevent the development of GUIs.

My first thought was (no pun intended), "iconoclasm".

In a world where iconoclastic religion holds sway, people will believe that GUIs are evil and/or degenerate. Words are important; unnecessary representations of things are an affront to God.

@Dotan Reis's idea regarding elitism has real potential too. If the early computer users were both rich AND smart, then a personality cult of computer-elitism would lead people to only ever want to use text-based interfaces.

Earl Jenkins
  • 1
    This is a much stronger motivator for avoiding GUIs than any technical limitation. – barbecue Dec 08 '16 at 20:44
  • 1
    Iconoclasm powers editor wars. – noɥʇʎԀʎzɐɹƆ Dec 11 '16 at 00:04
  • 1
    Actually that makes sense. If games in Germany are censored so as not to show certain WW2 figures, and Facebook censored a Swedish government video about breast cancer, then a developer in an iconoclastic society would really be overcareful not to have his program classified as adults-only. – Shadow1024 Jun 21 '17 at 14:01
13
  • Stop the push to put a computer on every desk; TUIs can be used by experts, but GUIs were all but required to make the jump from "specialist equipment" to "general use equipment."
  • Never see a capitalist-driven push to create a consumer workstation market (TUIs work for trained professionals, and don't demand a GUI)
  • Increase the culture of elitism towards computers; it has forever been a trend (although diminishing as time goes on) with computer/IT people to prefer more difficult means to prove oneself; many IT guys today "prefer" Linux, but can't provide a non-cardboard-cutout argument as to why. Command Line/Terminal being the same deal.
  • Hamstring the display market. Keep monitors primitive, mono-colored.
  • Introduce a terribly executed marketing ploy for GUIs; turn the consumers and the market off the idea
  • Have major OS creators/communities view GUIs as inefficient and ineffective. More elitism.

...Basically kill the capitalist market drive, and introduce bad press and elitism to drive GUIs away.

Ranger
  • 7
    But terminals are better, they're closer to the software and often provide access to more functionality easier than a GUI does – dot_Sp0T Dec 05 '16 at 20:55
  • 3
    @dot_Sp0T TUIs will always require a steeper learning curve and make features and functionality less obvious. They're less inviting to new users, require more investment, and are less intuitive. Those are the reasons GUIs took over. Also a big reason why touch controls on mobile devices took over. TUIs aren't better than GUIs, but GUIs also aren't better than TUIs. Which to use depends on the environment, the user, the technology, and the culture. – Ranger Dec 05 '16 at 20:58
  • 3
    This answer is absolutely biased and not based in fact. Text-based interfaces are demonstrably better for a lot of tasks than graphical ones. Composability and automation are not a given with GUIs, yet come naturally to text-based UIs. GUIs are the more accessible tools, but certainly not the more useful or more powerful tools. – Polygnome Dec 06 '16 at 00:09
  • 2
    There is nothing "more difficult" about Linux (there are many versions of Linux and I can't speak for all of them); it is a simple and effective OS. If I were selecting an OS for someone with no computer knowledge, then Linux Mint would be a good choice because of its tendency to carry on working once set up (and setting up is quite simple). Linux tends to make it easier to add your own code to the OS and do certain advanced actions that are not available on other systems. That's why many experts use it. It's just not what you're used to. – Donald Hobson Dec 06 '16 at 01:24
  • @DonaldHobson I didn't mention any OS specifically, and yes I agree that multiple distros of Linux with GUIs make great, user-friendly OSes. On the other hand I wouldn't hand a terminal-only distro like Linux Arch to your average stay-at-home-parent and expect them to enjoy their experience. – Ranger Dec 06 '16 at 04:16
  • @Polygnome Eh? When did I smack talk TUIs? I use Terminal/PowerShell myself. They aren't good at being user-friendly to those non-tech savvy, and they aren't intuitive to the uneducated. A user educated, confident, and intelligent can make great use of a Text User Interface, but a GUI doesn't require any of those three characteristics of a user to be functional. – Ranger Dec 06 '16 at 04:18
  • @NexTerren "it has forever been a trend (although diminishing as time goes on) with computer/IT people to prefer more difficult means to prove oneself; many IT guys today "prefer" Linux, but can't provide a non-cardboard-cutout argument as to why" That part is condescending and extremely inappropriate. There are a lot of reasons to use OSes other than Windows. Just because you don't seem to understand those reasons doesn't mean they don't exist and aren't valid. And you throw in the "terminal/TUI" the same way. It's simply not true. Read the linked question! It has lots of pros of terminals/shells. – Polygnome Dec 06 '16 at 08:16
  • @Polygnome Ah, I did mention Linux, my bad on that. That being said my comment wasn't condescending. There are valid reasons to pick one OS, hardware vendor, etc, over another but so many people can't explain why they're so loyal to one technology without turning to Google. It doesn't matter how technical (choices of editors, distros, programming languages) or non-technical (which new phone), people all-too-often have strong opinions of technology without an actual thought-out(/researched)-justification. – Ranger Dec 06 '16 at 16:15
13

Search, don't sort.

Google Desktop made redundant 90% of the Windows GUI in 2004.

Apple implemented similar features in Vanilla OSX at a similar time.

No more clicking through sub folders trying to remember where you stored something. Simply remember some fact about it: Words in the title, words in the content, last modified date. Enter some of those parameters as a search, and the file appears instantly.
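As a rough sketch, the same "search, don't sort" workflow is already available from a plain text prompt (GNU find/grep assumed; the paths and search terms are made up):

find ~/documents -type f -iname '*report*' -mtime -7   # name contains "report", modified within the last week
grep -ril 'budget' ~/documents                         # files whose content mentions "budget", case-insensitive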

In terms of what you could do to move from "we don't use GUIs much" to "we don't use GUIs", either improve A.I. search capabilities, or send Microsoft bankrupt.

With MS out of the way, your computer's GUI would look like the Google home page. Blank white space, a single text box for input. At that point, it's not really a GUI any more.

Scott
  • But if you have more than one file matching the criteria, you need a GUI to select the one you want. But suppose you find your holiday photos OK. Now how do you edit them without a GUI? How about spreadsheets and word processors? I'm old enough to have used spreadsheets and word processors before there were mouse-based interfaces, and there's a good reason WYSIWYG editing killed non-WYSIWYG - if you care what your document looks like, going round the loop of "render, not quite what I wanted, render again, too far, render again" is a painful waste of time. – Graham Dec 06 '16 at 12:44
  • 3
    @Graham it seems you've never used LaTeX before. Photo editing will be painful though – Cem Kalyoncu Dec 06 '16 at 19:59
  • 1
    @Graham unfortunately, LaTeX's rendering loops are sometimes quite annoying, but for many types of document it's still by far more efficient than anything you could do with WYSIWYG, especially if you're concerned with accurate design. — A spreadsheet is just a poor man's replacement for a proper data language. — With multimedia manipulation you're undeniably right, you don't get around a GUI... though even here there's a certain trend towards text-based editing, with ever more scripting capabilities built into CADs/NLEs/DAWs and even some innovative pure graphics programming languages. – leftaroundabout Dec 06 '16 at 21:27
  • re: spreadsheets - it's the same situation as with OS - kill off MS and the competitors will succeed with something more practical but less pretty. Wolfram in this case. – Scott Dec 06 '16 at 22:58
  • Search and sort are not user interfaces, they are tasks you perform IN a user interface. Both can be done in either GUI or CLI environments, and neither addresses all of the many other functions required in a UI. "Type something in a box" is also something done in both GUI and CLI environments. – barbecue Dec 08 '16 at 20:42
  • 1
    @barbecue maybe I didn't make my point clear enough. Yes, the Google Homepage is a GUI. It's a textbox and a button. But you could literally replace it with a command line text user interface and it would be exactly the same. It doesn't need to be a GUI, and doesn't use any features that a GUI is good at that a TUI isn't – Scott Dec 09 '16 at 02:01
  • @scott You're right that it could be a TUI instead of a GUI, but there's nothing about this that actively discourages the use of a GUI. If either approach works equally well, why is one preferred over the other? – barbecue Dec 10 '16 at 02:18
8

An important thing to consider here is that once you've gotten past the steeper learning curve, working with text-based input is frequently much easier than using a GUI.

An example: Suppose I have a directory containing a few thousand files, scattered across various subdirectories. I want to sort them out into separate directories based on various criteria. Let's say I want to move all the files starting with "foo-" and ending in ".log" that were created in the last day.

In a GUI, the most efficient way I can do that is probably to sort the files by file extension, then go into each subdirectory, find the block of files starting in "foo-" and ending in ".log", then right click on each individually, open up properties, check the modified date, then drag it into the new directory if it was modified in the last day. Then I move to the next file and do the same thing. And hope I don't make any mistakes while manually doing this a few hundred times. And in practice, if all I have is a GUI, I'm just not going to reorganize those files because there's no way I'm going through all that.

With a command line, I type find . ! -type d -name 'foo-*.log' -mtime -1 -exec mv -t other_directory '{}' + and I'm done in 5 seconds. And in practice, it takes about 5 minutes because I don't use the -mtime argument that often and I need to look it up in the manual real quick (which consists of typing man find, then /modified to find the right section).

For most tasks, the difference isn't quite that extreme, but the command line is almost always the more powerful option. The command line version certainly looks more complicated (and to be fair, it is), but once I learn it, I can get things done so much faster than I could otherwise. Aside from my web browser, the only reason I use a GUI at work is so I can keep multiple terminals on the screen at the same time. Unless the task is specifically graphical in nature, a GUI just feels like a toy to me.

Now consider your requirement that the systems not be "Expert-only". I won't deny that right now, proficiency with the command line is generally expert-only, but think about average difference in computer literacy between a 14 year old and a 74 year old. The adult has had just as much time to learn the skills, and yet they struggle with it. But the kid grew up with this stuff and finds that it comes naturally. If you create a society in which most people learn how to use a command line as an "Experts-only" skill, then in a generation or two, it'll just be another trivial skill that everyone learned as a kid.

Edit: A couple people have mentioned GUIs that can filter files according to modification date, so here's a slightly more complicated example. This will sort all .log files into directories of the form 'logs/2017-05-20/' based on their modification time, creating the directories as needed.

find . ! -type d -name '*.log' -exec bash -c '
  dir="logs/$(date +%F -d "@$(stat -c %Y "$1")")"; mkdir -p "$dir"; mv "$1" "$dir/"   # GNU date/stat; e.g. logs/2017-05-20
' _ '{}' \;
Ray
  • 1
    I think your case is an example of a bad GUI [bad for this specific task], not of command-line superiority. I could do your kind of copying in the Windows Commander GUI easily in 10 seconds. – Arvo Dec 07 '16 at 08:38
  • The example is a bad one even using basic Windows. Open Explorer. Go to the directory you want to search. Type "foo*.log" in the search box. It will give you the option to add a search modifier, one of them the last time the file was modified, and you can select a date range. The results will show up, and you can drag and drop them all to whatever folder you want. – Keith Morrison Nov 21 '17 at 20:21
7

Just a little suggestion: you might also want the data-entry keyboard to be totally different. The guy most responsible for the GUIs and mouse we use today, Douglas Engelbart, had originally developed a chord-based input system: instead of having a button for every letter, the user had a one-handed keyset that used combinations of keys to create letters, like chords on a guitar. It's worth looking into.

RMH
  • 1
    How would this stop GUIs developing? If anything, having a spare hand would seem to make a GUI more likely to evolve, because users wouldn't have the useability issue we all share of having to move one hand between the keyboard and mouse. – Graham Dec 06 '16 at 12:36
  • 1
    @Graham it would make text-based input continue naturally into the mobile age. If everyone had a Bluetooth keyboard-glove on all the time, a terminal would be the most effective way of interacting with your phone. (FWIW, I'm typing this from a 10-finger keyboard, using vimperator to compensate for the problems of Firefox being GUI based...) – leftaroundabout Dec 06 '16 at 21:12
  • I didn't mean to imply that a chord-based keyboard would prevent the development of a GUI interface. I think that is inevitable, but I thought the chord-based keyboard was different enough, without being too radical, to fit an alternative universe as described by the poster. – RMH Dec 07 '16 at 14:25
  • 1
    @Graham You might be referencing this, but that was the original plan - one hand on the keyboard and one hand on the mouse at all times. Mouse for navigating, keyboard for data entry. – TessellatingHeckler Dec 07 '16 at 18:56
  • @TessellatingHeckler Yeah, that's the idea. There's a reason fast jets use HOTAS - it's simply the best ergonomics. The same principle for computer use is definitely an advance. Unfortunately we have always had a large user base with QWERTY keyboards (or AZERTY or whatever local variant) which made this impractical. As always, there needs to be a strong reason to change an established user base. The mouse was simply a better way to move a pointer than cursor keys, and better for fine control than a joystick. The chording keyboard didn't have enough incentive to displace QWERTY though. – Graham Dec 08 '16 at 11:23
6

Your link gives a clue:

The Xerox Alto systems, because of their power and graphics, were used for a variety of research purposes in the fields of human-computer interaction and computer usage.

They built a GUI that is recognisably the source of the concepts still used today, and then researched human-computer interaction, which presumably just refined the ideas already raised but, more cynically, may have merely justified the preconceived notions.

An early “bright idea” got funded, and directly inspired the major GUIs that appeared in consumer products.

Arguably, the ideas were ahead of the hardware and early implementations were inferior to what might have been.

If some different “bright idea” got researched, studied, and refined in the early days before commercial products, we might have gone a different route. In fact, a paradigm that was not so graphics intensive might have done better, sooner, before machines got powerful enough for the GUI to really be practical.

Then, if the general public had caught on to concepts that transcended "direct manipulation" and "what you see is what you get (what you see is all you got)", as the experts did, then even when things got prettier, the notion of direct manipulation (only) might not have made the same inroads.

It would be cool to know what concepts / manipulation paradigm might have been developed that would be better than a plain CLI.

JDługosz
6

Well, you kind of kill it when you say that Norton Commander, Emacs, vi and friends don't count as GUI. At that point, there's hardly anything left that does count as GUI, perhaps just the visual fluff you get from high-resolution (e.g. more than 80x25 and such) displays.

So, let's assume that's exactly what you mean. No fluff.

Why do we get so much fluff? When it first arrives, it has a certain novelty value. But that wears off rather quickly, and is actually quite discouraging to many users. Just look at all those examples like rounded corners, gloss, transparent windows and similar - you show them off for a generation or two, just to flex your muscles in front of a crowd of fawning fanboys, they get copied all over and used in all the wrong applications, and then the novelty wears off and the fashion changes. Look at Windows 10 compared to Vista (all that gloss and transparency!) or XP (rounded everything!). The Windows 8/10 design is simple, clean, unobtrusive; a nice show of what remains when you get rid of the fluff.

So why do the graphics remain, rather than going back to text interfaces? The answer is actually quite simple - it makes a lot of complicated problems easier. Mind you, I'm not saying it's a panacea. It isn't. Text interfaces still have plenty of benefits:

  • Friendlier for remote terminals
  • Easier human auditing, with easy logging of everything that happens at the terminal
  • Easier showing of history in general
  • Easier composition of text-only applications (though this fades when any sort of "GUI" enters the equation, even in text mode); see the pipeline sketch below
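To make the composition point concrete, a minimal sketch built from nothing but standard POSIX text tools (the paths are illustrative), answering "how much space do my ten biggest log files take?":

du -k /var/log/*.log | sort -rn | head -n 10 | awk '{ total += $1 } END { print total " KB in the 10 biggest logs" }'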

Now, of course, graphics had a head-start in applications that were, well, graphical. Computer-aided design. Publishing. It's not really a long list. Even today, some people can't stomach using a graphical interface for things as complicated as DTP - at best, they have a graphical window into what the layout is going to look like on paper (or what have you), while they do the actual editing in something like TeX, or even MarkDown or (gasp!) HTML.

Why did graphics win on the desktop in general? As noted before, text-mode applications still had great "GUIs": full-blown integrated environments with all the cool things true GUIs give you, like keyboard shortcuts, menus, mouse control, hinting, all the nice discoverability.

Exactly because of those advanced users that everyone here is calling to the rescue. Why? Because there was no compatibility anywhere. Everyone did text-based applications their own way. Even attempts at standardisation like POSIX, or even MS-DOS (which was designed to be quite a bit different than it actually turned out, mostly for - guess what - compatibility with IBM DOS, which got released slightly earlier) mostly failed. Even on the IBM PC (and its clones), where Microsoft quickly gained dominance, every application had its own idea about what commands should be named, what actions should do what, how to format their input and output data. Nobody tried to make common interfaces or formats. There were just endless arguments about who was better. There was no end in sight.

And then Xerox came along with their revolutionary work at PARC. Now, mind you, much of this was utterly impractical when the research teams actually designed it. There were no computers powerful enough to run their systems while also being anything close to affordable by any family, or really even most corporations. But computers got powerful quickly, and everyone went to the well. Atari, Amiga, Apple, Microsoft - everyone adopted the same basic paradigms. Everyone also added some of their own, but those also spread quickly in the new world - a world of inter-operation and compatibility. In no small part because the ones who cared about compatibility started winning. MS-DOS wasn't the best OS, not by far - unless you cared about the fact that it ran pretty much everything. You could take your applications from DR-DOS, IBM DOS, and a few dozen other Something-DOSes and OSes, and run them on MS-DOS. Which OS do you buy? The one that has you locked in to a couple of software packages, or the one that gives you pretty much all of them? Which OS do you design software for?

Windows wasn't the first graphical OS, but that didn't matter anymore. The drive for compatibility was already there, and in full swing. Use a mouse to point at a button, press the mouse button, action happens. Every application on every system behaved the same. You had windows, you had buttons, you had scrollbars and menus - and there was a lot of pressure to unify their behaviour as much as reasonable, while still appearing somewhat different. And even when platforms differed (slightly), two applications on the same platform never did - something Linux still struggles with to this very day, with the misguided idea that it's the application that should pick the GUI, rather than the user. What did "advanced" users do? They utterly and entirely ignored it, happy with their proprietary (funny, eh? :)) and incompatible CLIs. Advanced users are a lot more invested in their platform, simply because they invested so much time and effort in becoming proficient in that one platform. Advanced users are the bane of progress.

So the solution isn't to make everyone an advanced user, quite the opposite. Expect no effort from your users. Start with environments that try to standardise their interfaces - use the same keyboard shortcuts, naming conventions, formats. Think about accessibility, not just efficiency. Sure, ls is fine if you have a horrible keyboard or you can't type very well - but list is a hell of a lot more accessible. Use aliases if you need to, but even those should conform with other systems - you're not going to keep carrying your aliases over to other computers you need to use; just stick to defaults. Kick out anyone who doesn't play nice. Get rid of the hipsters, who not only can't recognise progress - they sneer at the very idea of progress.

A nice, compatible and mostly standardised interface will give you the inertia you need. Applications like Norton Commander, not command-line ls. Applications like Turbo Pascal, not vi. Search by wildcard, not regular expressions (but feel free to keep the advanced option!). Sort "by human", not "by computer" - Folder 100 should never end up in sort order between Folder 2 and Folder; deal with it. Learn everything the graphical OSes did right, and use it too. Don't consider remote terminals too much, even smart terminals - you'd never get real interactive applications there; bandwidth is less of an issue than in a graphical application, but latency is just as horrible, in some cases even more so. Standardise rich terminals; streaming-text-only isn't good enough by far, and neither is just text positioning on a fixed background. Make it real smart, like what true GUIs managed to do.

Keep the focus on freely integrated systems, rather than large proprietary bags of tricks (and no, keeping it "FSF" or "OSS" doesn't make it any less of a "large proprietary bag of tricks"). Have developers all over the world coöperate on what they're doing, rather than competing purely out of spite and other misguided initiatives. Find ways to engage users and improve their productivity, instead of arbitrarily introducing differences just to make conversion harder. Instead of ten competing packages "of everything", modularize - give users an easy way to make choices without making things appear too complex. Remember how Turbo Pascal, despite being an IDE, actually allowed you to plug in a custom linker, compiler, debugger...? Encourage that model. The company that's great at writing compilers isn't necessarily the best at linkers. Introduce productivity and discoverability features like auto-completion that mostly had to wait for GUIs in our history.

Does that leave us with all the problems solved? Almost. There are still things that graphics just does better. Layout is much easier with higher resolution, and so is resolution-agnostic design. Allow improvements over the text-mode ideal - for example, allow combining multiple "tile" sizes on one screen, so that you can e.g. have text written "as-if-in-80x25", while allowing other elements to be "as-if-in-80x40". Allow graphical elements to be included in a text-mode application - so that you don't have to keep changing the whole screen just to have a WYSIWYG look at your document, or to show graphs inside of a spreadsheet.

This is the truly complicated part - at some point, it becomes harder to justify that having two ways of doing fundamentally the same thing is a good thing; why have "hybrid" rendering on a Haswell machine, when you can render everything in graphics mode just as quickly, while keeping things simpler and prettier? Perhaps use accessories that can exploit extremely cheap low-resolution displays to keep better track of your whole system - or even give you a cool graphical "pretend" interface, in a similar way to those Nintendo mini-arcades, without giving up on the benefits of text mode.

Luaan
  • 4,035
  • 1
  • 18
  • 19
  • To be fair, emacs -nw is pretty definitely not a GUI, but a TUI. And many TUIs that process mice only do so because they, and terminals that allow the underlying program to interface with them, are widespread in our world. If systems never went fully GUI, it would be reasonable to suppose that such support would either not exist, or be an after-thought, or just be unused by most users. – mtraceur Dec 08 '16 at 07:25
  • Anyway, despite disagreeing on a few points and nuances, I +1'ed this. I think you touch on several good points about why GUIs developed how they did, what role the drive for consistency played, and some of the reasons why advanced users can be (though I wouldn't agree with "are") impediments to some forms of progress. – mtraceur Dec 08 '16 at 07:35
  • @mtraceur I don't think the distinction between a TUI and a GUI makes any sense; the only real difference is the resolution. There's no reason why TUIs wouldn't have mouse (or lightpen) control - that existed just as long as true GUIs - in fact, it was a lot easier, since addressing in text-mode is much easier (again, resolution). And of course, the real reason people bought mice was for Doom, everybody knows that :P The main difference is still between CLI and non-CLI - and while many "TUI"s have CLI integrated in some way (just like, say, Total Commander), they're not CLI. – Luaan Dec 08 '16 at 08:23
  • 1
    Why do you think mouse support is trivial? Do you know how the TTY/PTY (teletype/pseudo-teletype) subsystem in most operating systems works? If I write a relatively flexible TUI and the user runs it in their shell in their terminal, there's no guarantee at all that I'll even have any indication of what the mouse is doing - unless the terminal converts mouse interactions into escape codes or there's another non-standard API for accessing them from the terminal slave side. For the terminal environment, mouse support is a tacked-on afterthought kludge. (A small sketch of this escape-code mechanism appears after this comment thread.) – mtraceur Dec 12 '16 at 07:21
  • 1
    Tangentially, however, I concede that many modern TUIs have approached GUIs in flexibility and functionality, so in some functional sense, it's fair to concede that point. For instance, I'd describe irssi, a TUI IRC client (as far as I'm aware, no mouse support to speak of), as being functionally comparable to any GUI IRC client, minus skin-deep features like mouse-support. So you do have somewhat of a point there. – mtraceur Dec 12 '16 at 07:27
  • 1
    @mtraceur Well, that was always a problem of unix-like systems. It was never a problem of DOS, OS/2, Atari... because they didn't stick to the idea that you're controlling your system through teletype (a tech older than a hundred years now!). That's why I noted that advanced users can hold progress back - because they have a much bigger investment in what they've already learned, and shun new approaches to doing the same thing, just because it would make the investment a waste (to some extent). There's so many things already working with TTY that the inertia was too great. Not so on DOS :) – Luaan Dec 12 '16 at 11:19
  • @mtraceur And I'm not exactly a Unix expert, but I haven't really seen a good approach to solving that, even with all the effort put into virtual TTY extensions or (god forbid) X. Few even try to change it, and are treated either with indifference or outright scorn. Because everybody knows that teletype is the king :D Mind you, some holdovers from that era still exist even in the Windows/Mac world - we still have the old-school control characters, still have plain text I/O streams (as if there was such a thing as "plain text"!)... but mostly for backwards-compatibility, not going forward. – Luaan Dec 12 '16 at 11:21
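
To make the escape-code point in the comments above concrete, here is a minimal sketch, assuming a Unix-like system and an xterm-compatible terminal that implements the optional mouse-reporting extension. Nothing below is guaranteed by the plain TTY interface itself, which is exactly the "tacked-on" problem being described:

    import sys
    import termios
    import tty

    # Ask an xterm-compatible terminal to report mouse clicks as escape
    # sequences, then decode a single event.  Unix-only; keyboard input
    # arriving first would confuse this deliberately simple sketch.
    fd = sys.stdin.fileno()
    saved = termios.tcgetattr(fd)
    try:
        tty.setcbreak(fd)                  # read bytes without line buffering
        sys.stdout.write("\x1b[?1000h")    # enable "normal" mouse tracking
        sys.stdout.flush()
        event = sys.stdin.buffer.read(6)   # one press event: ESC [ M Cb Cx Cy
        if len(event) == 6 and event[:3] == b"\x1b[M":
            button = event[3] - 32
            column = event[4] - 32
            row = event[5] - 32
            print(f"button {button} pressed at column {column}, row {row}")
    finally:
        sys.stdout.write("\x1b[?1000l")    # turn mouse reporting back off
        sys.stdout.flush()
        termios.tcsetattr(fd, termios.TCSADRAIN, saved)

If the emulator does not implement this extension, the program simply never sees a mouse event - which is the comment's point: in the terminal world, mouse support is an optional add-on, not part of the base contract.
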
5

Amazon Echo, Alexa, et al. are computers without a GUI. Heck, I even say "OK Google" to my phone to get it to do stuff like text my friend. (Funny story: no matter what I said to my first cell phone with speech recognition, it always misinterpreted it... "Call mom", "Calling Brian". "Call Neil", "Calling Brian".) I predict that in 10 years we won't interact with a GUI as much as we talk to it or use "texting" (e.g. natural typing) for those times when talking would be rude (such as on a plane).

Tim
  • 2,752
  • 11
  • 20
  • Now try editing your photos using "OK Google". Not going to work. Voice recognition is nice as an input device, but that's all it is. If you need output from the computer - whether that's a list of things it's found, pictures or whatever - then you need a GUI of some kind. – Graham Dec 06 '16 at 12:47
  • 3
    @Graham I can tell you have never used a good TUI. You definitely don't need "a GUI of some kind" to get output from the computer. Check out for example Microsoft Works for DOS or Microsoft Word (available for DOS) or Norton Commander for DOS or PC Tools for DOS or any number of TUI products. – user Dec 06 '16 at 13:40
  • @MichaelKjörling My first DOS word processor was Word Perfect. Much better than Word at the time. :) I take your point that a text-based interface is possible to some extent - but only to some extent, and only for limited applications, and with greatly limited usability. Word Perfect's far-too-late entry into WYSIWYG was the direct cause of its failure. – Graham Dec 06 '16 at 15:46
  • I worked with developmentally disabled people for ~20 years. It's amazing when you see someone blind from birth navigating a GUI better than you. Our interfaces for the blind are afterthoughts. Imagine if we had developed those interfaces first and developed GUIs as an afterthought. (Although, since humans are primarily visual, I would never find a world without GUIs believable.) – Tim Dec 06 '16 at 16:12
  • @Graham "text-based interface is possible to some extent"? You have not used bash. You have not used markdown. Oh, you have. It's in stackexchange. – noɥʇʎԀʎzɐɹƆ Dec 11 '16 at 00:06
  • @uoɥʇʎԀʎzɐɹƆ Except that, being a principal software engineer with a master's degree and over 20 years of experience, I have. Mastery of the command line and scripting is a great tool to have. But it's a really, really crappy tool for anything visual, and long command-line invocations are often prone to error and slower than the equivalent GUI. It's like discussing how to best use a hammer and chisel to extract a screw. The right answer is that you don't, you get a bloody screwdriver. Interface for graphics? Get a GUI. It's the right tool for the job. – Graham Dec 12 '16 at 11:08
  • 1
    @AllOfYou - The OP asked how to make GUIs secondary, not non-existent. Obviously a GUI is easiest when editing graphics/photos (tough for Sports Illustrated to increase a model's bust with only text/speech). And obviously a TUI is easiest when doing highly repetitious tasks (like .BATch files or .PS1 scripts). Even in today's GUI-dominated world a CLI can be quite useful and (frankly) preferable to an old DOS guy like me. A keyboard and mouse is really meant for someone with 3 hands IMO. It's a terrible interface, but slightly better than hitting tab 37 times to select the element I desire. – Tim Dec 12 '16 at 15:56
  • @Graham I agree, but the OP has little point – noɥʇʎԀʎzɐɹƆ Dec 12 '16 at 22:31
5

pre-1988: Xerox hires a brilliant legal team

1988: Apple files suit against Microsoft, and Xerox against Apple, same as real timeline.


Then a lot happens in 1989-1990:

Xerox wins, or settles to their advantage, the patent infringement case against Apple. Then they join as plaintiff in the Apple-Microsoft look and feel case and win that too. [in the real timeline, Microsoft won the look-and-feel case in 1994, and Xerox lost theirs]

Additional lawsuits arise over Americans with Disabilities Act (ADA) infringement. Companies that developed early GUIs without accessibility features or automation capabilities settle or are found liable. Xerox escapes liability because their GUI never left the lab, and their legal team is awesome. Apple and Microsoft are liable for civil damages despite losing IP rights to Xerox. [in the real timeline, ADA rules have no teeth until 20+ years later]

New government regulations, riding on public opinion in support of the ADA requirements, make accessibility and automation capabilities mandatory on all software, and introduce federal education funding and standards for text-based computer literacy in the USA, quickly cloned in Japan and Europe.

Apple re-brands the Macintosh as a toy and pulls out of the educational market.

Microsoft delays the launch of Windows 3.0 to remove features that infringed on Xerox's patents and to add ADA compliance features. The resulting product is late, unusable, and has no ecosystem support - a total flop that burns consumers and investors.

On Linux, X11R6 development stops for lack of volunteers, and although you can find early versions, they are illegal (for lack of accessibility features) and unmaintained (like DeCSS is today).


1992: IBM launches OS/2 and nobody notices. Same as the real timeline.


Finally, by 1995 GUIs are both academically and commercially dead:

Apple pivots to voice control, continuing to be a leader in user experience, to compete against text interfaces.

Microsoft recovers from the Windows 3.0 fiasco by investing in a 32-bit version of MS-DOS to compete against a now GUI-less Linux.

GUI experience is now hazardous to your resume. Venture capital and research funding for GUIs dries up, like an extended version of AI Winter.

Tim Berners-Lee decides to focus on creating a free version of Gopher, abandoning work on HTTP and X-Mosaic, so a GUI-based web never materializes.

Xerox kills all GUI research and never launches a product. They retain all patents even during bankruptcy, preventing others from launching a product.


So in this timeline there is a roughly 10-year period between 1985 and 1995 where GUIs struggle to gain popularity and ultimately fail on multiple fronts, a full 20 years before "modern general-purpose computers" come along.

Alex R
  • 1,226
  • 1
  • 8
  • 10
  • 1
    Linux wasn't a significant player in the desktop market even by the early 2000s; I started using Linux myself around '00-'01 (I distinctly recall using it in mid-2001) and while at that point the kernel was stable, the GUI was very rough around the edges. OS/2 1.0 was completely text-based (the first GUI was added in 1.1, and what you might call a modern GUI only appeared in 2.0). Apple's background at the time was in text-based interfaces (Apple II, anyone?). Windows 2.x was practically useful at least as an environment to develop against, but perhaps not as a stand-alone environment. Etc. – user Dec 06 '16 at 13:32
  • In this alternate timeline, Linux becomes popular enough to get Microsoft's attention. – Alex R Dec 07 '16 at 01:35
  • @MichaelKjörling Oh, I remember that so well. I've had to write my own drivers for almost everything - the mouse, the display driver, the network card... ugh. And the "reward" was X Window with horrible text rendering, barely working at all. Quite a cold shower after using Windows 3.11 and 98. And Microsoft was extremely savvy when they designed Windows to be embedded (that is, you could write Windows applications and sell them self-contained to people who didn't have Windows) - it wasn't really until Windows 3.x that people started using Windows as an interface in itself. – Luaan Dec 12 '16 at 12:05
  • Well, Linux (and other unix-like systems) got plenty of Microsoft's attention in our timeline as well, multiple times. It just never really paid off - we'll see how their latest attempt fares :) – Luaan Dec 12 '16 at 12:06
5

Have everyone in your world have poor to zero eyesight!
This will force the need for screen readers. Screen readers with GUIs are a real pain; it is much easier to read plain text than to describe a window, for example.

Maybe this will have some more implications on your world, but it is definitely doable.

8192K
  • 509
  • 4
  • 8
  • 3
    Or the person/group that invented computers was blind. They invented the computer as a way of giving blind people an easier environment to work from, and then "see-ers" caught on to how useful computers could be. – josh Dec 06 '16 at 10:36
5

There are a few general ways to make modern computers that are not GUI intensive.

Change Computer History:

This is somewhat of an obvious choice, because there were a few big pushes in computing that made the GUI happen. On our own planet Earth, computers became huge in the countries that won WWII and the Cold War, i.e. Britain and America. This connects to a recent network question, "Why are all coding languages in English?". So, what's important about that? Well, America is a capitalist country, so every company that hopped on the computer bandwagon created its own coding language. Just think about today: we have Haskell, C, C++, C#, Java, etc. For the command line we have Cmd on Windows, and terminals/shells on Linux and Apple systems. But what if the government got more involved?

In 1965, America passes a bill that mandates one American coding language, which will be used in all programming and on the command line. It is developed in a project similar to the Manhattan Project, drafting the best minds in computer science, who all have to work together. All of a sudden, a huge barrier to entry is diminished: people only have to learn one new computer language instead of seven. The government also decides that they want the most powerful computers possible to run missile guidance systems, nuclear subs, etc. They don't have time for fancy stuff like graphics.

The drive for "a computer on every desk", never happens, instead the government puts a computer in every school for kids to learn. Now those kids grow up and buy their own computers, using nothing but command line.

Eventually, the technology is released to the public and a new company makes the GUI, but no one cares about that fluff, as it is in an alpha stage and is pretty crappy. It is seen as a dumb luxury like VR in the 90's and won't take off for at least another few decades, if ever.

Limit Computing:

As mentioned in another answer, the internet rules much of our lives. And when bandwidth was low in the 90's, we didn't send sweet memes; we sent ASCII, or just words. If the bandwidth is limited, all of a sudden images go away and the internet is text-based. Now, take away the non-connected desktop - the government says all computers must be linked to the net at all times, so there is no longer stand-alone personal computing - and the biggest factor is bandwidth. If bandwidth is limited, no GUI.

Limit People:

Not a great option, but if people are blind, a GUI is unimportant. If people are colorblind, they don't like the way the GUI looks; it cannot convey as much meaning, so it isn't used. If people have no hands to use it with, then they have to use voice dictation instead. In these cases, a GUI is never bothered with.

EvSunWoodard
  • 268
  • 1
  • 6
  • 3
    "In 1965, America passes a bill that makes one American coding language, which will be used in all programming" That reminds me of COBOL ("created as part of a US Department of Defense effort to create a portable programming language for data processing") or Ada ("originally designed by a team led by Jean Ichbiah of CII Honeywell Bull under contract to the United States Department of Defense (DoD) from 1977 to 1983 to supersede over 450 programming languages used by the DoD at that time.") – user Dec 06 '16 at 15:29
  • @MichaelKjörling - that link brings up a good point. There are different types of languages, but they really only boil down to three types: scripting, imperative, and declarative languages; everything else is just semantics, or ways and means. So, the government could make one of each. Also, the difference between what the U.S. did and what I am saying is that the U.S. let other programming happen while COBOL was being made; if they banned other languages, it would make it much easier to make them converge. – EvSunWoodard Dec 06 '16 at 15:37
  • 1
    +1 for what I think is the key: early education. Imagine if instead of lessons in middle school on how to make PowerPoint slides, you got lessons on solving various problems using a Linux/Unix-like terminal/shell environment. GUIs would still happen, but the average person would grow up content never making the jump into GUIs, finding it very odd/unintuitive, the reverse of what we have now. – mtraceur Dec 08 '16 at 07:06
  • On limiting people: why not make epilepsy more common, so that people will stick with less animated and colorful screens. – atakanyenel Dec 08 '16 at 12:50
  • 1
    Your analogy with programming languages is slightly flawed. "Terminal" is just a GUI frontend to whatever command shell you have set as your default. The shells themselves would be analogous with programming languages. So, sh, bash, dash, ksh, csh, tcsh, zsh, just to name the common ones. And the MS world has cmd.exe, command.com, and Powershell. – Ray Dec 08 '16 at 23:23
  • I don't agree that officially mandating a single programming language would make any difference to the ubiquity of computing, but I still upvoted because a large government education initiative in the early Cold War was the first thing I thought of to stop the GUI being born; if everyone is already learning to use text-based interfaces as part of their schooling, then the leap from mainframe to desktop can be made without needing a simpler interface. It might even be made earlier if people are more familiar with computers. – Torisuda Dec 11 '16 at 00:42
  • @mtraceur Mind you, we did have that. And it was just as effective as when they tried to teach us Excel, Powerpoint and what not - that is to say, it didn't stick. To some extent, you could say that this was because they've already used GUIs long before they got to those lessons in school, but by far the most important reason is that for most people, computer is the journey, not the destination. It's a tool to help them do the task they actually need done, not the reward. They don't think about the address bar in their browser when searching for... study materials... on the internet. – Luaan Dec 12 '16 at 11:55
  • I'd say that if you mandated one programming language for all of the USA, what you'd do is simply destroy the US as a major power in computer-land. Mind you, english might still be used (or not), but you just can't build when everyone is required to use the same tools. Just think of how much of a major power the US would be in crafts, if you could only use a screwdriver. Good luck if you're a carpenter, eh? :) – Luaan Dec 12 '16 at 11:58
  • @Luaan - The difference being that you can do everything, except art, on a text-based computer. It may be less intuitive, but it still works. GUI is basically just a bunch of shortcut buttons; there are still lines of code running in the background. So, it's like I said, as long as the computer was results oriented, we would build very powerful computers that were non-GUI, because GUI would be an unnecessary flourish. Also, many computer scientists are of the mindset that the GUI was the biggest mistake in computers, because we waste computing power on graphics. Setting us back years. – EvSunWoodard Dec 12 '16 at 14:47
  • @EvSunWoodard But that's the same for CLI. The CLI is also just a bit of fluff for connecting the user interface with the code running "in the background". Of course, with the exception of systems like LISPM, which are true "CLI is code" - I'd much prefer that to the typical unix-y system with e.g. bash running C applications. But guess what? Unix rolled over LISPMs by being cheaper and "faster". Many computer scientists are very wrong; there's no waste of computing power on graphics - graphics don't dominate the power, and aren't used when they're not worth it (except by cargo culters). – Luaan Dec 12 '16 at 15:46
  • 1
    @Luaan - You are quite right! That is why the free market sells so many computers with GUI. I love my GUI. But, while you can argue that CLI is also a type of GUI, it is minimalist. And regardless of the truth behind many computer scientist's opinions, the perception is there! If some government had the same perception, then they might impose such laws as to ban wasteful, extravagant GUI, and push CLI instead. It's a way alternate history works, as long as there is a perception, there is a possibility, even if it isn't what happened. – EvSunWoodard Dec 12 '16 at 16:39
  • @EvSunWoodard The perception is indeed extremely strong, as many studies have shown. Not just for the "wasted processing power", but even for speed - most CLI users think they're much faster than when using a GUI, but it turns out that's not the case even if you invest a lot more in learning to use CLI than in a GUI. Using a mouse feels much slower than it really is, and using a keyboard feels much faster than it really is - most likely because the brain is getting good "feels" from doing so much work while bashing away at the keyboard, while mouse use feels "empty". – Luaan Dec 12 '16 at 17:15
4

How about an option that relies neither on crippling your people, nor on them consistently being irrational and/or unimaginative?

Make the displays expensive.

If a live (that is, displaying data as that data is created) graphics-capable monitor or a projector costs as much as a car or even a house, most families aren't going to be buying one. But businesses and governments could afford to purchase some for their artists, designers, engineers, and scientists to work with.

Most people would be stuck with printers, or possibly character displays made using relatively inexpensive technologies such as flip-dot (or flip-segment) displays, LED segments, or Nixie tubes - technologies that, at least in your world, cannot be shrunk down enough to make a useful desktop graphics display, but are sufficiently compact for a workable desktop character display.

This does, unfortunately, mean that live television is likely never to become mainstream. Movies, however, should be fine, possibly even at home. Rather than showing them on a real-time graphics display like we do in the real world these days, just use a projector and film. The key characteristics of film are that displaying it is simple (just shine a bright white light through it, with a lens to focus it) and that it lacks a fast write-to-read turnaround time, so it's unsuitable for live graphics. Television may end up more like an audio-visual newspaper or magazine subscription, with film delivered to your door on a regular basis, rather than a live broadcast.

For those wanting a print preview in their home, simply add an extra cartridge (or several, for colors) to printers, filled with dry erase ink. Bundle in some laminated paper, and there you go: print a preview with the erasable ink onto the laminated paper, look it over, then print the final result on regular paper with permanent ink while erasing the preview paper for reuse later.

8bittree
  • 236
  • 1
  • 6
  • 2
    The reason early PCs came with text-only monitors as standard issue was that MEMORY was expensive. You need memory to display anything on a raster CRT or LCD display device unless the software is willing to constantly re-calculate the image. While a 25x80 character page of text fits in 2 kbytes (or 4 kbytes with primary colors/underline/...) of memory, a 720x384 pixel black and white image already needed almost 35 kilobytes! Given that 32-640 kilobytes were considered appropriate sizes for the main memory of a desktop computer in those days due to cost... (A short worked version of this arithmetic appears after this comment thread.) – rackandboneman Dec 07 '16 at 08:57
  • 1
    @rackandboneman Right, we did have television back then too, but making memory expensive would hamstring the computers themselves. Better to make something exclusive to the monitors expensive rather than something used by both. – 8bittree Dec 07 '16 at 12:47
  • Even storing TV images was a horribly expensive and complicated business in the 50s and earlier. Video recorders sized like a big stove :) And that would be a really cumbersome kind of memory to use for computer output. The other alternative (and it WAS used in the 1960s and 1970s for computer graphics): Expensive, difficult to build and maintain CRTs (google DVBST CRT if you care) that you could literally tell to keep the image once written (needs to be completely rewritten to erase anything!). That technology is near extinct except for older oscilloscopes still in use. – rackandboneman Dec 08 '16 at 07:36
  • @rackandboneman My point was that making memory expensive makes computers expensive which appears to be against the OP's wishes. Remember, this is world building, so we're trying to build a world that isn't necessarily identical to ours. I'm suggesting that the OP make things which are needed by the monitors, but not by the computers themselves expensive. Possibly the construction process, or certain materials... or whatever. And at the same time, have some sort of cheap, text-only display available for everyone, even if that same technology is actually expensive in real life. – 8bittree Dec 09 '16 at 15:21
  • You can still make it about memory - just make your world's approach to inexpensive computer memory based on a technology that makes it usable for computation but makes it suck as a framebuffer (access modalities/protocols/timings for the memory play a big role there). – rackandboneman Dec 09 '16 at 15:37
  • And btw: actual PC MONITORS even from 1982 (as supplied with the first IBM PC) were perfectly capable of displaying graphics - it was the image-generating hardware in the computers that was limited unless upgraded. The monitors got sent an image that happened to be that of text; it was just generated without using a framebuffer. Mainframe/multiuser systems were a different kettle of fish: these used integrated monitor/keyboard/image&text-generator solutions, called "dumb terminals", often connected via a long and SLOW line - think phone & fax speeds, and it was often literal phone lines! – rackandboneman Dec 09 '16 at 15:44
  • http://terminals-wiki.org/wiki/index.php/Category:Year_of_Introduction has a lot more early computer interface history :) – rackandboneman Dec 09 '16 at 15:50
  • @rackandboneman Yeah, I'm aware of that. I've written drivers for a serial terminal, and, while maybe not quite 1982, I've written drivers for the 1987 VGA standard. But I'd find it rather hard to believe that you would be able to have multi-megabyte caches on your CPUs, with latencies in the tens of nanoseconds or less, yet not be able to create a framebuffer of a few hundred kilobytes with reliable timings at 10 Hz or so. You could go that route, but I'd personally find it more plausible to say that CRT phosphors are rare and expensive, LCDs are hard to build, OLEDs burn out too fast. – 8bittree Dec 09 '16 at 16:08
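
For readers who want to check the arithmetic in the comments above, a short worked version (a sketch of my own; the 720x384 monochrome mode is taken from the comment, and real adapters varied - Hercules, for instance, used 720x348):

    # Video memory needed for one screenful, per the figures in the comment above.
    text_cells = 80 * 25
    text_plain = text_cells * 1      # 1 byte per character cell          ->  2,000 bytes
    text_attrs = text_cells * 2      # character byte + attribute byte    ->  4,000 bytes
    mono_bitmap = 720 * 384 // 8     # 1 bit per pixel, monochrome bitmap -> 34,560 bytes

    print(f"text only:         {text_plain:>6} bytes (~2 kB)")
    print(f"text + attributes: {text_attrs:>6} bytes (~4 kB)")
    print(f"720x384 bitmap:    {mono_bitmap:>6} bytes (~35 kB)")

A roughly 17x difference per screenful, which is why bitmapped displays were a luxury when main memory was measured in tens or hundreds of kilobytes.
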
3

how can I reasonably explain that GUIs never became mainstream?

Computers entered the mass market at the same time as useful speech recognition and synthesis. Instead of sitting in front of a screen and pressing buttons, users primarily converse with computers. That would make the concept of a GUI sound strange: "What do you mean I have to learn to press this and that and then that? Why can't I just tell it what I want?"

papirtiger
  • 257
  • 1
  • 5
  • 2
    "So you can watch porn without everyone nearby learning about your midget fetish". :P – Faerindel Dec 07 '16 at 08:34
  • "You mean you have to use your hands? That's like a baby's toy" – TessellatingHeckler Dec 07 '16 at 19:02
  • Back when e-mail was sweeping thru society, I read a bit somewhere: Imagine (voice) phones being invented after e-mail. Today we'd all say "You mean I can just pick it up and talk to someone? No typing needed?!" – user2338816 Dec 12 '16 at 02:30
  • Okay, but how do you get useful speech recognition and synthesis without computers? The way we do it now pretty much required computers to be mass market - to get the required processing power and memory, to get the tons of training inputs and checking... – Luaan Dec 12 '16 at 12:01
  • @Luaan sorry for the extremely late reply - note that I wrote "Computers entered the mass market". We had computers long before anyone had one at home. – papirtiger Nov 02 '18 at 12:22
3

Make the computers interconnected and bottlenecked by bandwidth. A low-bandwidth internet forces one to optimize the transmission of content, which is likely text-based.

From my own experiences with the initial stages of the internet, a GUI is barely usable across a network when bandwidth is low enough. Even a GUI-system specifically designed for client-server networking, such as X, is bothersome on connections like a 14k4 modem.

Before the WWW existed, we used the Gopher protocol to browse information systems across the world over dial-up connections. Then the WWW was invented and the internet became more graphical, but performance on graphical browsers (Mosaic, Netscape) was still agonizingly slow. Since the textual content was still the main attraction, many early users used text-based browsers such as w3m and lynx to browse the web. On Linux servers, successors like elinks are still used today.

If there were some reason for bandwidth to simply remain constrained, then GUIs might not develop at all. People would likely still create ASCII art, and TUIs would improve, maybe supporting multiple windows like the i3 window manager does.

  • 2
    "...TUIs would improve, maybe supporting multiple windows like i3 window manager" - No need to compare to a GUI window manager like i3, we already have terminal multiplexers: GNU Screen and tmux. Also, regarding low bandwidth: VNC, X11, and RDP are not the only ways to interact with remote data using a GUI. You can run the GUI locally and just transfer the actual data. We do this all the time: see email, chat, web browsers (a lot of pages are still mostly text, sometimes with GUI controls). No remote pictures != no local GUI. – 8bittree Dec 06 '16 at 17:11
  • 1
    You could imagine bandwidth staying low because phone lines are a natural monopoly. The company that owns the lines is somehow corrupt or otherwise dysfunctional, and no other company can get the infrastructure in place to compete with them. – Ben Millwood Dec 07 '16 at 16:32
  • You can disincentivize running the GUI locally by centralizing computing power outside of homes – maybe software-as-a-service with thin clients are developed sooner than it was in our world, or maybe there's some reason why bundling everyone's hardware together in one datacentre is important – e.g. because it means you don't have to use the terrible telecom monopoly's cabling to network your stuff. – Ben Millwood Dec 07 '16 at 16:37
3

You want a world where computers are widespread but GUIs don't exist? Simple: Find a way to make a world where everyone is totally blind - perhaps even where eyes were never able to evolve. (writing uses some equivalent of Braille)

PMar
  • 31
  • 1
2

Educate the public quickly

GUIs are popular because they're easy for new users to learn, and don't require as much specialized knowledge as using a CLI. For example, to change file permissions through a GUI on Linux, you can click little check-boxes labeled "read", "write", and "execute", while to change the same information with the CLI, you need to remember which bits correspond to which permissions and add them up into octal digits.
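
To make that learning-curve gap concrete, here is a small sketch of my own (the file name is hypothetical), using Python's standard os and stat modules to do what the check-boxes do:

    import os
    import stat

    path = "report.txt"   # hypothetical file, used purely for illustration

    # What the GUI check-boxes express: owner read+write, everyone else read.
    # On the command line you must know that read=4, write=2, execute=1 per digit:
    os.chmod(path, 0o644)                        # rw-r--r--  (6 = 4+2, then 4, 4)

    # The same change spelled out with named permission bits:
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR   # owner: read + write
                 | stat.S_IRGRP | stat.S_IROTH)  # group and others: read only

Nothing here is hard once learned, but every piece of it has to be memorized, whereas the check-box dialog carries its own explanation.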

If, for some reason, computer classes became a part of compulsory education during the time when CLIs were still popular, an entire generation would grow up using them. When GUIs emerged, they wouldn't seem to have much of an advantage over CLIs to the public at large. Further, CLIs - especially whatever shell(s) were taught in school - would have the inertia of consensus, and people would be unwilling to change.

Charles Noon
  • 203
  • 1
  • 7
  • CLIs did have the inertia of consensus. On some platforms, they still do. Most users knew them better than Excel users know Excel. That didn't stop them from disappearing. Everyone knew how to use CLI - and then Norton Commander (and friends) came and 99% of computer users dropped CLI, just like that. The only places where it survived was with 1) remote systems, where it was much faster, or an interactive interface simply wasn't available, 2) automation, especially for corporate/academical infrastructure, 3) hipsters (before it was cool!). – Luaan Dec 06 '16 at 20:46
  • 1
    CLIs had the inertia of consensus among people that used computers during the time that CLIs were the only option - not a lot of people (comparatively). GUIs became popular around the same time personal computers became popular. Most people that used GUIs learned with GUIs - those few that started with CLIs may or may not have switched, but they're the minority. That's my take anyways, and I'm no expert. – Charles Noon Dec 07 '16 at 00:24
  • 1
    That probably depends a lot on what region you're talking about. Where I'm from, people used text-based interfaces all the way up to Windows 95 or even longer for the most part; they still switched as soon as they could. And a 486 machine cost on the order of $10-20k in today's money - way more expensive than a new car at the time. And everyone still switched as soon as they could, the major exception being universities, which propagate CLI almost exclusively to this day :) – Luaan Dec 07 '16 at 08:33
  • That's really interesting, and suggests that GUIs have an inherent advantage over CLIs - at least to most. Perhaps an element of snobbishness could work, if enough people grew up with CLIs? Maybe OP could make a world full of the aforementioned hipsters... – Charles Noon Dec 08 '16 at 01:34
  • @Luaan You talk about Norton Commander as if it signals the inevitable transition of CLI->TUI->GUI. And yet, my father, who only got into computers in the 1990's shortly after the Soviet Union ended, was still choosing to do the majority of his day-to-day tasks in Far Manager, full screen, on Windows XP and later. I've lived on my own since 2009, so I'm not sure how much he still uses it. Meanwhile, I grew up on GUIs and spent years not getting why someone would want to do that, yet in the last few years I've been switching all of my computer tasks to CLI/TUI as quickly as I've been able to. – mtraceur Dec 08 '16 at 07:19
  • @mtraceur No, that's not what I'm implying at all. I'm saying Norton Commander (and friends) is already GUI, just a low-resolution one. There's barely any distinction between what you call "TUI" and "GUI" besides the higher resolution (and hand in hand, removing the monochrome tiling). The main thing is that "TUI"s have stagnated to some extent, especially on Windows, while "GUI"s kept developing further. I'm mostly using Total Commander which is essentially NC, but with drag and drop, clipboard, shell extensions and a much bigger area for content. I never clicked a button in TC - still GUI. – Luaan Dec 08 '16 at 08:10
  • @Luaan, the key distinction remains: one works in an environment that renders text, the other depends on environments that can render arbitrary graphics. Therein lies the boundary, hence the meaningful distinction of the terms. I'll happily agree, however, that they strongly demonstrate the lack of substantial usability/functional difference. Still, I do most of my file-browsing and manipulation in a Bourne-like shell, most of my text-editing in vi. Even if you lump TUIs with GUIs, for many tasks, I'm just happier with CLIs than GUIs. – mtraceur Dec 12 '16 at 07:14
  • @mtraceur Well, "happy" doesn't really compare well. It's mostly about familiarity, rather than which is really better for a given task at hand (and mind you, of course there's a huge boost to "subjectively better" if you already have a tool you understand, even if it's not "objectively" a better tool). If you like working with CLIs, more power to you - I don't; and not for a lack of trying (I've worked exclusively with a text unix-ey system for a few years, it didn't stick). And sure, vi is fine - but how different is it really from something like Atom or VSCode, apart from remote? – Luaan Dec 12 '16 at 11:15
2

If the users are non-human, a GUI may present serious issues. Maybe they have compound eyes, like insects, and any sort of pixel-grid display creates serious moiré fringing effects between the screen and their eyes. Or maybe they see in sonar, like bats or dolphins. How do you make a sonar screen?

If they are (almost?) human, maybe their society is a strict meritocracy (with fascistic overtones). You are not allowed to access a computer until you prove that you are intelligent enough to use one in an intelligent manner - in other words, program one. By the time you are a half-decent programmer, you will probably prefer a command-line interface over a GUI for most tasks in any case.

(If you are any sort of geek, you'll have heard the jokes about lusers and drool-proof keyboards. In this world, the geeks are the rulers).

nigel222
  • 11,701
  • 21
  • 33
2

I don't think this is possible, if you want to keep the possibilities of modern computers, especially if you consider Norton Commander as 'text' - since what it's really doing is abusing text to be a GUI - and most of what GUIs do is position text, outside a grid system. But one possible approach I haven't seen mentioned in other answers - text is machine readable, GUIs aren't.

This could come up in several different ways:

  • Mandatory software quality testing, coming in very early on. As soon as the first software with bugs appears, and companies realise they are paying for broken products - particularly if there is a serious catastrophe like an exploding space rocket - there is a big legal and regulatory push for software to be absolutely as described, with large fines for any bugs found.

    • This manifests itself as precise specifications for input and output, and mandatory automated testing with regulatory oversight. You can automatically verify the text which is displayed, and the screen output at every state, but you can't easily automatically verify the display of a curve, and the number of possibilities with user-resizable windows makes it infeasible to attempt. (See the sketch after this list.)
  • Mandatory auditing of one sort or another. All input and output must be audited for anti-fraud, or to guard against anti-consumer practices, or to mandate that computer systems from different providers perform the same way, or as a basic expectation in a digital society of how computers behave. You can audit typing and printing, but you can't really audit mouse clicks and GUI scrolling in the same way. You can audit "this picture was displayed: {}" for use with your one-off output specification, but you wouldn't want the overhead or storage costs of auditing every frame of a GUI.

  • The earliest developments of computing were very focused on interpreting text, and processing it in custom ways - e.g. governments broadcast news over a text feed like the UK's old Ceefax system, and individual people put keyword matches on the data stream which would alert them to things they found interesting. Businesses alerted on transactions, individuals played with data sets in real time - you could expect a feed of special offers from shops, from weather services, from news services, civil engineering (roadworks) in your area, up-to-date electricity prices, or whatever, and pick up on the things you care about. This happens early enough in your timeline that it gets embedded into the culture, and when GUIs come along, people regard them as a novelty but ultimately reject them as too limiting, because they can't be automated and pattern-searched - so GUIs are only used as an output device, not as the main interaction point. You work with the structured data; maybe you show it in a GUI if it's a graph, or maybe you don't.

  • The previous points interact; mandatory auditing means governments want a continual stream of input from every user, which they can search and gather population-wide statistics for, which means GUIs are only allowed to be used for display, but all input must come through a keyboard.
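
As a sketch of how cheap such verification is for pure text output (the function and its expected transcript below are hypothetical, invented only to illustrate the point made in the list above):

    import io
    import unittest
    from contextlib import redirect_stdout

    def print_invoice_summary(total: float) -> None:
        """Hypothetical text-mode 'screen': three fixed lines of output."""
        print("INVOICE SUMMARY")
        print("---------------")
        print(f"TOTAL: {total:10.2f}")

    class TestScreenText(unittest.TestCase):
        def test_exact_screen_contents(self):
            # Capture everything the program writes to the "screen"...
            buffer = io.StringIO()
            with redirect_stdout(buffer):
                print_invoice_summary(1234.5)
            # ...and compare it byte-for-byte against the specified transcript.
            expected = (
                "INVOICE SUMMARY\n"
                "---------------\n"
                "TOTAL:    1234.50\n"
            )
            self.assertEqual(buffer.getvalue(), expected)

    if __name__ == "__main__":
        unittest.main()

The equivalent check for a resizable window full of anti-aliased curves has no such simple "expected transcript", which is the regulatory wedge this answer describes.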


The mandatory software testing point could also come up in another way: the reason headless servers are so popular today is that less code means a smaller attack surface for security purposes. If all software had to go through an expensive regulatory audit process (or any constraint with a similar effect - say, software companies need to be insured against the risk of their code going wrong, and insurance companies charge per line of code insured, or per feature), then "less code" would push industries towards preferring TUIs wherever possible. Since a GUI has to display text and also graphics, it will always work out more expensive.


Another possible deviation from real world history is that our early output devices were RADAR screens and oscilloscopes, with an electron beam being scanned left to right and modulated up and down by an analog signal. They became CRTs, which were the dominant display technology for many years.

But what if CRTs couldn't become dominant, e.g. if regulatory limits prohibited vacuum chambers in devices sold to the public, because they were too dangerous due to the risk of implosion?


Environmental concerns, or financial rent-seeking behaviour. If you could tweak the world so that displaying a picture cost significantly more each time, people would avoid it for normal use. For example, suppose a 'text' screen came with every computer, and you could buy a 'graphical' screen as an addition to go alongside it - but it could only display 1000 graphics before the license ran out and needed renewing, or each update cost a day's worth of the text screen's electricity. The market would sort out how to do everything by text, while keeping GUIs available for occasional use, or for the wealthy.

TessellatingHeckler
  • 1,918
  • 1
  • 12
  • 12
1

Do a better job of teaching kids to read and write.

Let's draw a line between a system that is capable of doing graphics, when appropriate, and the GUI, which is to computing what "point & grunt" is to language. So your computer user has what I have on my machines (4 on or beside my desk at the moment): a window manager running on top of X, which mostly has a bunch of xterm windows on it. To interact with the computer, I use language in the form of commands, rather than pointing at something and clicking the mouse.

Now this doesn't mean I can't do graphics. I can do anything from looking at photos I've downloaded from my camera (with text commands) to viewing PDF documents (which I may have created with text-based LaTeX) to visualizing the output from the 3D seismic tomography program I'm working on (the input to which is text). I just don't have to have an icon that I click on for every single thing I want to do, and I don't have to waste time trying to figure out what those icons - potentially multiple thousands of them - are supposed to mean. (If I run into an unfamiliar text command, I can look it up in the manual or with a search engine, just as I would look up an unfamiliar word in a dictionary.)

If I need a list of commands for users not familiar with a system or application, I can use text menus, as in fact I do with the browser (qupzilla) that I'm using at the moment. It has some GUI icons in a bar across the top, but I've never figured out exactly what they mean, because there's a handy text menu too.

GUIs, IMHO, are basically a crutch, needed because a large fraction of the population seems to be functionally illiterate.

jamesqf
  • 20,460
  • 1
  • 37
  • 76
  • That's certainly consistent with newspapers and managers which insist on communicating via videos, rather than text. – Arlie Stephens Dec 07 '16 at 02:04
  • More seriously - increase the prevalence and popularity of the personality traits which produce bookkeepers, librarians, computer nerds etc - at the expense of those which produce sales people, politicians, and entertainers. Make "geek" a compliment. Make "perfectionism" and "expertise" more desirable than "quick hacks" and "flexibility". A modern GUI is after all a way for an unskilled user to manage a task, that they'll never be able to get any better at. – Arlie Stephens Dec 07 '16 at 02:08
  • 2
    Many, many, highly literate people (eg. people with literature PhDs, established authors, academics, etc.) have little or no expertise with computer interfaces. It seems extraordinary to me to draw a link between them. – Ben Millwood Dec 07 '16 at 16:29
  • @Ben Millwood: Because GUIs came along before most of those people were exposed to computers, and then they were force-fed the GUI by Windows and Macintosh, so they never had a chance to experience how much better a good CLI can be. – jamesqf Dec 07 '16 at 19:45
  • @BenMillwood I agree it doesn't strongly correlate with illiteracy in natural languages. But, what can you do with a mouse? You can point at things, and then express a few different intentions (left-click, right-click, maybe middle-click or wheel-click or even fourth-mouse-button-click, etc). That's the expressive power of point-and-variously-grunt. With gesture support, you can wave your hand. It takes keyboard actions/modifier to get any more flexibility. A good CLI has the expressive power much more comparable to a language. It is illiteracy, just in a completely foreign computer language. – mtraceur Dec 08 '16 at 07:43
  • OK, but that still means "Do a better job of teaching kids to read and write" is not the solution, because an inability to read or write is not the problem. – Ben Millwood Dec 09 '16 at 04:33
  • @BenMillwood The problem of point-and-click GUIs would probably not exist if the adult population back when computers were first exposed to the ignorant masses ( :P ) had been more literate. But today, yes, the reason why people should be more literate probably has little to do with GUIs. – Nobody Dec 09 '16 at 15:38
  • @Nobody: Nope. Literacy rates in the Western world were already at 98-99% by the 1970s. Which is another reason this answer isn't plausible. – Ben Millwood Dec 10 '16 at 13:35
  • 2
    @BenMillwood 98%-99% of people who, when pressed, can read a short easy text with a little effort. That includes functional illiterates, depending on your standards. Even today the numbers are (much) worse than that if you define literate as something like "Immediately understands all text encountered in daily life without any conscious effort; understands the gist of major works of literature with little effort; understands the gist of contracts they sign with amount of required time and effort in a sensible proportion to the importance of the contract." – Nobody Dec 10 '16 at 14:04
  • @Ben Millwood: Yes, as Nobody says. You might look at the number of products which provide instructional videos, even though for a functionally literate person text does a much better job of conveying information at a much lower cost. Or consider why so many more people watch TV & movies, even though books generally do a far better job of telling a story. – jamesqf Dec 10 '16 at 18:41
  • @Nobody though keep in mind that "literacy" and "functional literacy" can mean essentially anything that a writer wants them to mean. Somewhere out there, someone is labeling people who struggle to analyze Shakespeare at a university level as "functionally illiterate" in order to support their request for another five million dollars of grant money to build Adult Reading Centers. – Robert Columbia Nov 15 '17 at 03:41
  • @RobertColumbia I was trying to explain which level of literacy I thought was necessary for computer interfaces to evolve differently. Whether you call people below that level functionally illiterate or whatever is a completely different question which I wasn't trying to answer. – Nobody Nov 15 '17 at 21:29
  • Funnily, a lot of programmers I know are functionally illiterate. – idrougge Nov 20 '17 at 19:26
  • @idrougge: You must associate with a different group of programmers than I even have. (Apple or Microsoft, perhaps?) Pretty much all the ones I know read books, and not just programming manuals. – jamesqf Nov 22 '17 at 02:14
  • No, I'm mostly thinking of the bearded ones who only read Unix man pages and pulp science fiction. – idrougge Nov 22 '17 at 09:27
  • @idrougge: Pulp science fiction comes in books, and one has to be literate to read it. (And to appreciate it often takes a higher degree of scientific, technical, and anthropological literacy than other sorts of writing.) It may not be to your personal taste, but that's quite a different matter. – jamesqf Nov 23 '17 at 04:14
  • @jamesqf Especially if thats everything you ever read. – idrougge Nov 23 '17 at 08:42
1

Try and look at 'what' you actually want to use computing for. Will everyone still be as connected as they are these days? if so, could they just be using more powerful versions of the early mobile phones which had buttons and an LCD screen (my old Ericsson A1018 was like this.) Or are you looking more for a computerized world, but without necessarily needing the level of user input we have now?

I mean for instance, look up 'internet of things'. The basic concept is everything around us now has a computer in it (kettles, toasters) which are all inter-connected to form their own network. However, the micro-controllers within them fairly rarely have a GUI. At most, there are a lot of blenders/food processors which have buttons on them for 'smart' cooking. These are dedicated function buttons, while the micro-controller inside simply (or not so simply) reads the data from a few sensors and applies some logic to the cooking mode.

The Raspberry Pi is another good modern example. Although it is typically connected to a mouse/keyboard and TV/monitor, it needs none of these things to function. I've seen them set up as wireless computer servers; one of my colleagues has half his house automated with micro-controllers, including wifi cameras and his 3D printer, all connected through the Pi as a server. He can access his printer from work and watch it on the camera to make sure his house isn't on fire. The point is that the Pi itself has no GUI, and the tablet or whatever he uses to access it is no more than a dumb terminal.
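
To make that 'dumb terminal' point concrete, here is a minimal sketch of the kind of text-only status command such a headless Pi could expose over SSH. The paths and device names are hypothetical illustrations, not my colleague's actual setup:

```python
# Minimal sketch (illustrative paths and device names): other processes drop
# plain-text status files into STATUS_DIR, and this command just prints them,
# so any dumb terminal connected over SSH can read the results.
from pathlib import Path

STATUS_DIR = Path("/var/lib/homestatus")   # hypothetical status directory

def read_status(name: str) -> str:
    f = STATUS_DIR / f"{name}.txt"
    return f.read_text().strip() if f.exists() else "unknown"

for device in ("printer", "camera", "thermostat"):
    print(f"{device:12s} {read_status(device)}")
```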

If you're talking purely about how to access the computer without a graphical interface, then the next level up (or down) would be the old DIP-switch-and-jumper approach to computer programming/usage. I have an early Amstrad PPC512 laptop at home which consists of a monochrome LCD screen, two floppy drives and a modem, with no hard disk or any operating system other than what is loaded from the boot floppy. Selecting which floppy to boot from, the external monitor source, etc. was done with an array of DIP switches on the side.

There are plenty of other good examples throughout computing history: the Apollo Guidance Computer used during the Moon landings had the DSKY interface, which was operated with dedicated function keys (VERB, NOUN) and 7-segment readouts. Graphics calculators would be another example you could 'borrow' and modernize.
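
As a rough illustration of that verb/noun style of interaction, here is a tiny sketch of such a command loop; the codes and readouts are only loosely modelled on the DSKY and should not be read as the real Apollo assignments:

```python
# Tiny sketch of a DSKY-style "verb noun" loop; the codes and readouts are
# illustrative, not the real Apollo assignments.
COMMANDS = {
    ("16", "36"): lambda: print("MISSION TIME  00:42:13"),
    ("35", "00"): lambda: print("LAMP TEST     ALL SEGMENTS ON"),
}

while True:
    try:
        entry = tuple(input("VERB NOUN> ").split())
    except EOFError:                      # Ctrl-D ends the session
        break
    action = COMMANDS.get(entry)
    action() if action else print("OPR ERR")   # unknown verb/noun pair
```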

TL;DR: Your world started with pre-GUI computers such as the Apollo Guidance Computer. Instead of the desktop computer and monitor becoming standard, research went into portable computers such as graphics calculators and early mobile phone technology, while industry focused on single-use computers programmed by DIP switches. By the time the mainstream internet became available to link IoT devices together, people still predominantly relied on text-based systems like their button phones.

Something a little less anachronistic would be if haptic feedback devices (vibrators, or braille keypads) were invented sooner. Maybe AI was developed earlier, reducing the need for 'hands-on' computing, although this begins to overlap with the voice-activated approach mentioned in a previous answer.

Nathan
  • 11
  • 1
1

Are you bound to the users being human-like? If the user's senses are not dominated by vision, you can skip the GUI and rely more on a tactile/sound/smell user interface.

Basically, you can imagine a mole-like being using a computer.

L.Dutch
  • 286,075
  • 58
  • 587
  • 1,230
1
  1. Before computers are powerful enough for graphics, heavily invest in computer science education, starting from primary school. This would likely be a sound investment anyway, at the very least in hindsight.
  2. Everyone will be able to use a terminal. You can't teach theoretical computer science to first graders (also, large parts of it weren't known back then), so you'll start with a very practical approach to computer science, which implies heavy use of actual computers: programming. That's the part which is useful to the general population anyway, so they can automate little problems in their daily life and workplace.
  3. Everyone will be able to use a terminal more efficiently than graphical programs, because they already know how, and terminals are inherently better, so the investment to learn a GUI wouldn't be worth it.
  4. There would be no need for graphical user interfaces.

That is, there would still be graphical output, but only for stuff like previewing 3D models you describe textually (it exists! It's really easy to learn and powerful in my opinion), previewing documents you wrote in something like LaTeX, viewing pictures and videos, etc.
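
To give a flavour of what describing a 3D model textually can look like, here is a minimal sketch that writes a unit cube as a plain-text Wavefront OBJ file (the cube is just an illustration; any graphical preview would be handled by a separate viewer):

```python
# Minimal sketch: a 3D model described purely as text. Wavefront OBJ is a
# plain-text mesh format ("v x y z" vertex lines, "f ..." face lines).
# The unit cube below is only an illustration.

# Eight corners of a unit cube.
VERTICES = [
    (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
    (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),
]
# Six quad faces; OBJ indexes vertices starting from 1.
FACES = [
    (1, 2, 3, 4), (5, 6, 7, 8), (1, 2, 6, 5),
    (2, 3, 7, 6), (3, 4, 8, 7), (4, 1, 5, 8),
]

with open("cube.obj", "w") as out:
    for x, y, z in VERTICES:
        out.write(f"v {x} {y} {z}\n")
    for face in FACES:
        out.write("f " + " ".join(str(i) for i in face) + "\n")
```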

Nobody
  • 2,138
  • 10
  • 22
  • Computer science and computer usage are mostly unrelated. – Raphael Dec 09 '16 at 07:53
  • @Raphael Obviously, at least in one way. But at the same time, computer science implies programming, and programming implies being able to use a computer for programming and being able to write programs for existing computers. That is, you'll be able both to use a terminal and to make programs which run in a terminal. Now explain to me why anyone with that background would use an early (probably shitty) attempt at a GUI, or even a modern one which used (wasted :P ) millions of man-hours during its creation. – Nobody Dec 09 '16 at 15:19
  • "computer science implies programming" -- not necessarily, no. Not anymore than physics implies welding. – Raphael Dec 09 '16 at 18:46
  • @Raphael I don't care about far-fetched philosophical implications. Sure, CS isn't equivalent to programming, that's not what I was saying. But if you study CS then you'll write lots of code. No way around it. Hell, if you study physics, you'll write lots of code too, though less than the CS students (welding, on the other hand, is definitely not on the curriculum, at least where I study). If you want proof, check the first-year CS curriculum at any large university. Or check this out: http://www.vvz.ethz.ch/Vorlesungsverzeichnis/lerneinheitPre.do?lerneinheitId=109028&semkez=2016W&lang=en – Nobody Dec 09 '16 at 18:59
  • "But if you study CS then you'll write lots of code" -- maybe, but not necessarily. I personally know a number of counter examples. You are right to say that it's not the norm, though. Anyway, I apparently have to make my point clearer: you probably want to propose teaching computer skills, including programming, not computer science. That's just similar to teaching physics being the wrong course of action if you want people to solder instead of glue. – Raphael Dec 09 '16 at 20:41
  • @Raphael That's exactly what I'm not proposing. What they actually should learn are the important concepts of computer science. You can't teach concepts alone (if you do it to university students, they'll find your course boring unless they study mathematics, if you try it in primary/secondary education, they just won't get it). So guess which applications (something which children can do in schools and as homework) go well with CS concepts? How would a representative sample look like? Do you really want to tell me that doesn't include programming? – Nobody Dec 09 '16 at 21:57
  • Certainly, in the same way you can apply physics by playing with building blocks. Doesn't change the fact that, in my experience, the groups of people who are interested in (and get) the important concepts of CS and those who are avid programmers overlap less than most people think. As a matter of fact, computer-/programming-centric CS school education scares many young people away from the field. – Raphael Dec 10 '16 at 14:31
  • @Raphael How about you just tell me what you think a representative sample of appropriate concept/application combinations for young CS students would look like, instead of making strange physics comparisons which tell me nothing more than that you disagree? How do you teach, say, algorithms and data structures, which I hope you agree are important CS concepts? – Nobody Dec 10 '16 at 14:39
1

Most of these answers focus on technology being held back; I am going to assume it sprints forward: direct communication with the computer via brain waves over wires, invented before GUIs.

If you use telepathy or neural implants to communicate with your computer, no keyboard, mouse, or GUI is necessary. You have a direct brain-to-computer link with vastly superior reaction time.

The only possible problem is that people might choose to visualize a GUI in their mind. However, I doubt that would be helpful with a direct computer-to-brain link.

cybernard
  • 2,756
  • 9
  • 6
  • I doubt that people would avoid using graphical representations with a direct brain-computer link. Sight is by far the dominant sense in humans; that's why GUIs work in the first place. Even thinking about problems in my head involves visualising things "as if in sight". Even thinking about CLIs, I have an image of a CLI in my head. In fact, I picture myself bashing on the keyboard right now :) – Luaan Dec 07 '16 at 08:39
  • @Luaan In my world keyboards were never invented because of the brain link, so you don't know what one is and can't be picturing yourself bashing one. – cybernard Dec 07 '16 at 12:47
  • 3
    Computers didn't invent keyboards. Keyboards existed long before computers. Are you saying that people went straight from drawing by hand on a piece of paper to brain-computer interface? Why are there no typewriters? Why are there no pianos for that matter? No printing press? Even then, I'd simply be picturing myself handwriting, instead of bashing the keyboard - it doesn't really change much on the argument :) – Luaan Dec 07 '16 at 13:08
  • I am saying that in the poster's question they are in an alternate reality, since we already have GUIs. Pianos and the like can have keyboards, just not computers. Since you can think faster than you can speak, type, click, or write, the neural interface would be the dominant way of getting things done. A GUI would just slow you down. – cybernard Dec 07 '16 at 23:18
0

Obviously, you need to keep computers expensive. No cheap microprocessors means no cheap microprocessor-driven displays, and probably no multicore GPUs. Go back to the 60s vision of terminals in every home connecting to a central computer. Keep bandwidth low so graphics are mostly a non-starter (remember waiting minutes for web pages to download in the 90s?). Maybe have the local computer utility be something like a library, with lots of public-access terminals and some high-speed printers so people can print things out and take them home. I remember in college, the terminals in the dorm terminal rooms ran on 2400-baud leased lines. When you went onto campus, the terminals were on 9600-baud direct lines, which was a major incentive to do work in the campus labs.
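
A quick back-of-envelope sketch of why those line speeds keep graphics a non-starter (assuming roughly 10 bits on the wire per text character and a raw, uncompressed 1-bit image; the figures are illustrative):

```python
# Back-of-envelope sketch: time to push one screenful down a serial line.
# Assumes ~10 bits on the wire per text character (8N1 framing) and a raw,
# uncompressed 1-bit image; all figures are illustrative.
TEXT_SCREEN_BITS = 80 * 24 * 10     # one full 80x24 character screen
BITMAP_BITS      = 640 * 480        # crude 1-bit 640x480 bitmap

for baud in (2400, 9600):
    print(f"{baud} baud: text screen {TEXT_SCREEN_BITS / baud:5.1f} s, "
          f"1-bit bitmap {BITMAP_BITS / baud:6.1f} s")
```

At 2400 baud a full text screen already takes about 8 seconds to repaint, while even that crude bitmap takes over two minutes, so anything graphical stays painful.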

TMN
  • 1,119
  • 7
  • 8
0

Check this out: https://en.wikipedia.org/wiki/HAL_9000

HAL 9000 is an artificial intelligence. It is true that HAL has monitors to display things, but the "interaction" part of the interface is verbal rather than graphical.

Only when HAL breaks down and loses his verbal communication abilities does he need keyboard input, which Dr. Chandra performs.

sampathsris
  • 709
  • 4
  • 11
0

Another possibility would be patent laws that weren't prepared to handle the technology. A scenario where patents were a bit broader than today's, and lasted for 50 or more years, could allow somebody to patent the GUI display, movement-based interaction with a computer, and/or some other aspect essential to GUIs or GUI interaction.

To protect their brand, the owner of the patent prices the patent license outside the range of most consumers. Instead, they license it at exorbitant prices to industries that absolutely can't function without it. Perhaps they sell computers with GUI capabilities, or they sell software, or a software toolkit for building software, or perhaps some combination, like Apple.

The laws have since been changed to avoid that sort of problem, but they can't retroactively invalidate the patent.

TheBlackCat
  • 3,775
  • 2
  • 12
  • 24
0

If we're talking humans, they will certainly come up with ideas for GUIs; using our eyes and hands in combination is one of our evolutionary advantages. We are so specialized for this, and have been drawing things for thousands of years, that there is no circumventing the idea.

So you have to find ways to prevent these ideas from being put into practice.

One possible angle is energy. Computing visual output and actually displaying it takes lots of energy. It's probably what most user-facing devices out there use the most energy for, even with modern display technology.

So if energy is, historically, much more expensive than it has been in our world (maybe fossil fuels were not a thing, or we caught on early and stopped using them as much?), then energy efficiency becomes a factor in designing computing devices a lot earlier than it did in our world. Once text interfaces are ubiquitous, GUIs probably have little traction outside of specialized applications.

Make sure to invent something other than screens for output, though: our text interfaces still need one, and the more useful text interfaces use colors, too.

Raphael
  • 876
  • 6
  • 9
0

Kill Doug Engelbart before 1968, and blow up SRI's augmentation lab. To make doubly certain, ensure that Alan Kay is never born and drop a tac nuke on Xerox PARC. You'll also need to do something to reduce programmer productivity so that projects such as the ones that led to GUIs will never be undertaken - so ensure that computer programmers are federally licensed and regulated, programming tools require a license to purchase and the costs are insanely high, outlaw the production of any computer smaller than a minivan, and make COBOL the One True Sanctioned Programming Language. (Sorry, Grace, but by today's standards it's a pig...). That oughta do 'er..!

-1

Kill Windows

Have the company that invents GUIs make their name worse than mud, for example by planting spyware in the GUIs. Then GUIs are linked to the name of the most infamous company in the world. Also point out that GUIs use a lot more CPU power, RAM, etc. than TUIs. The combination of these two factors (no GUIs, and more hardware left available for your software because it isn't being eaten by your OS, which just runs smoothly in the background using 0.5 k of RAM) means a lot more effort goes into making useful stuff, like voice recognition, AIs and other things like that.

Mark Gardner
  • 2,884
  • 1
  • 18
  • 32
  • "Windows"? Anti-microsoft chip on shoulder detected. Computers had GUIs (1973) before Microsoft existed, and it was 10 years after Microsoft was founded until they started on Windows. And they can't really use "a lot more power" if 1970s computers could be used for GUIs, but 1990s IBM PCs were often used for DOS only interfaces, the twenty year older computers just weren't a lot more powerful. – TessellatingHeckler Dec 07 '16 at 19:57
  • @TessellatingHeckler Windows 10-spec PCs without Windows are much better than 1970s computers – Mark Gardner Dec 08 '16 at 15:39