“11001001”
Written by Maurice Hurley and Robert Lewin
Directed by Paul Lynch
Season 1, Episode 15
Original air date: February 1, 1988
Star date: 41365.9
Mission summary
Enterprise reports to Starbase 74 for its 30,000-light-year service inspection and upgrades, to be performed by Bynars, aliens who work in linked pairs and are connected to a central computer on their homeworld. Commander Riker doesn’t completely trust them, but he’s content to leave Wesley to keep an eye on them after the alien computer whizzes modify the holodeck and program up his dream girl: a woman named Minuet who tells him exactly what he wants to hear.
Picard is not immune to the unexpected charms of Riker’s sophisticated, artificially intelligent holodate. He hangs out with them in a recreation of a New Orleans jazz club while Data copes with a rather urgent emergency: The computer reports that the antimatter containment field is failing. Unable to contact either of his superiors, he orders a full evacuation and sets the ship’s autopilot to send Enterprise a safe distance from the starbase and any inhabited planets in case it goes kablooey.
Once the ship is clear of spacedock, the magnetic containment field miraculously regenerates and Enterprise sets course for Bynaus, the Bynars’ homeworld. They were played! When Picard finally leaves Riker and Minuet to do whatever a man and a computer-generated woman do together, they discover the ship is on red alert and has been abandoned. Prepared to destroy the ship to keep it out of enemy hands, they return to the Bridge, where they discover their hijackers are unconscious and dying. Minuet confirms Picard and Riker’s suspicion that the Bynars stole Enterprise—for all intents and purposes a mobile computer—and used it as temporary backup storage for their planet’s data, to protect it from a power surge from a nearby supernova. But while the system is down for unscheduled maintenance, the Bynars are on the brink of death.
With the danger past, Picard and Riker work together to find the correct ZIP file in time and restore the computer on Bynaus. They return Enterprise to the starbase, where the Bynars will face legal action for stealing the starship. Riker hurries back to the holodeck to continue the best night of his life, but Minuet is gone.
RIKER: She’s gone. I tried variations of the program, others appeared, but not Minuet.
PICARD: Maybe it was all part of the Bynars’ programming. But you know, Number One, some relationships just can’t work.
RIKER: Yes, probably true. She’ll be difficult to forget.
Analysis
This episode is by turns fascinating and disturbing. Setting aside the uncomfortable implications of Riker’s infatuation with Minuet for a moment, I’ll say that I’ve always liked this episode, and that hasn’t changed. The Bynars are some of the more intriguing aliens we’ve seen in Star Trek thus far, especially in TNG, however simplistic their development might be. It’s particularly interesting to re-watch “11001001” in 2012, versus 1988 when it was first broadcast, or even 1991, when I probably saw it for the first time.
While a Star Trek episode like “The Ultimate Computer” cautioned against advancing too far and relying on machines too much, this TNG episode illustrates a symbiosis between organic beings and technology and avoids being too judgmental about it. Riker displays some uncertainty and suspicion over their work, but not without good reason. Despite the obvious—perhaps too obvious—flaws in having a central computer running your planet, intricately linked to everyone who lives there, there’s no moralizing or criticism of the way these lifeforms have evolved. Even the Bynars are quick to point out there are some drawbacks.
Considering the incredible usefulness of an advanced computer like the one aboard Enterprise, and the fact that one of its crew is an android, TNG is much more progressive and accepting of technology than its predecessor. Looking back at it today, the connection between the Bynars and their own version of “cloud” computing is eerily prescient. Our ubiquitous smartphones are not much smaller than the communication boxes they wear, and even our language is adapting and becoming a kind of shorthand, largely thanks to text messages and Twitter. The internet is already a kind of hive mind, and we will only rely on it more as time goes on and we become even more connected than we are now.
Although this is essentially another “aliens take over the ship” episode, their plan is cleverer than most and doesn’t rely on the crew being too stupid for a change. The actors really seem to be settling comfortably into their roles at last, with a better idea of who their characters are. There’s also a nice cinematic quality, thanks to stock footage originally shot for Star Trek III: The Search for Spock, repurposed here as Starbase 74.
However, it might have been more engaging and believable to see the “natural” Bynars surface when their computer is offline, perhaps making them miserable and less intelligent, instead of killing them. I mean, I get pretty cranky when I can’t get a good Wi-Fi connection or my phone freezes. And while I liked seeing the overall competence of the crew—from Data’s quick, practical command decisions to Picard and Riker’s thoughtful attempts to regain control of the ship and discover what was going on—this time I was aware of a glaring flaw: The Bynars seemingly crafted their puzzle for Riker to solve, but required the synchronized actions of two people to unlock it. Minuet points out that they didn’t know Picard would be caught on the holodeck too, though one could argue that they adapted their plan to account for him as soon as they realized it was a two-for-one special.
The biggest issue is the squick factor of Riker basically falling in love with a computer program, which won’t be the last time this happens to a lonely guy on the ship. I can understand both why he might fall for the Bynars’ unusual program and why people would be disturbed by it, but overall the delicate matter was handled rather well. I was surprised, but liked the fact that Picard took Riker’s compromising situation all in stride. (Though I still object to people waltzing into other people’s programs whenever they feel like it.) It was also gratifying to see the difference between the captain and Riker. Picard views Minuet as an oddity, a marvelous piece of technology, while Riker is clearly more interested in her as a nice piece of ASCII—a more intimate diversion than an intellectual one. Like a horny teenage boy, he wants to find out just how far he can get with her.
I’m all too happy to overlook that creepiness though, because the rest of the episode was fun and engaging. Riker did ask the right questions about Minuet and their chances at any semblance of a real relationship at least, and as I pointed out to my wife when we watched this, I can’t blame him for choosing a computer program over Troi. Her response: “There’s a difference between Troi and a computer program?” Zing!
Eugene’s Rating: Warp 4 (on a scale of 1-6)
Thread Alert: When you’re wearing athletic gear like this to a Parrises Squares match, you’ve already lost.
Best Line: RIKER: “A blind man teaching an android how to paint? That’s got to be worth a couple of pages in somebody’s book.”
Trivia/Other Notes: The upgrades are meant to correct the holodeck problems experienced in “The Big Goodbye,” but the original episode production order would have established that the Bynars’ modifications caused those problems.
This episode, and Minuet in particular, will also be very important in a fourth season TNG episode, “Future Imperfect.”
Previous episode: Season 1, Episode 14 – “Angel One.”
Next episode: Season 1, Episode 16 – “Too Short a Season.”
I think my biggest problem with this episode (apart from it earworming me with the Rush song with a very similar title) is the payoff. The fact that there was never really any threat or danger is essentially the same thing as “it was all a dream”. Why have you wasted my time telling me a story without conflict? Didn’t the Bynars (a stupid name that thinks it’s clever) know enough about the Federation (and vice versa, for that matter) to be aware that if they had told them about the problem, a solution would have been found? It’s like the writers came up with a situation and forgot to come up with a decent motivation until they were almost done.
And it just occurred to me that walking in on someone’s holodeck time is essentially shoulder surfing.
This really is one of the first fairly decent episodes. They came up with a fascinating and unique new alien species that is totally believable. Riker, Picard and Data all seemed to be relatively well-formed characters that existed beyond the lines they were delivering. The plot itself isn’t brilliant, exactly, but it’s considerably better than some of the others in this season.
Riker gets a lot of crap for his womanizing ways and I know they think he’s a total douche for crushing hard on a hologram but at the time that this originally aired and even when I originally saw it, his position was a fairly sympathetic one for the typical geek/nerd who didn’t know how to talk to women. Back in those days the internet was just starting to become a thing and I know a ton of people (myself included) who formed relationships with others online in a way that was virtually impossible in the offline world. Maybe it’s pathetic and not the type of thing that anyone would be proud of, but the holodeck as a device for meeting women (or men, depending on one’s preference) without all the social bull that regularly crippled so many people was pretty damned attractive.
The only really odd thing about it is that Riker, skirt chaser (and catcher) that he is, would be the one falling in ‘love’ on the holodeck. You’d expect that of Geordi or Wes, not the manslut.
I need to finish watching the episode before I can comment with any certainty, but two things come to mind:
Minuet didn’t work for me. I know beauty is subjective, but she should at least be hot for me to believe Riker falling for her. Maybe she’s ’80s hot, I dunno.
Were the Bynars an early stab at the Borg?
Probably I won’t be able to get round to rewatching this until tomorrow, so I’ll have to wait until then to refresh my fondest memory of this episode: Riker’s horrible, pseudo-Bogartian pickup lines.
Damn, now I’ve got that Rush song stuck in my head too, and it’s not even that great a song. Although the title isn’t actually the same; the song is “The Body Electric” but there’s a repeated “one zero zero one zero zero one” in the lyrics.
@4 etomlins:
Damn, you’re right. It’s been too long. Time to break out my Rush albums again.
@3 ShameAndFailure:
She wasn’t really ’80s hot either. Attractive, but not forget-all-about-your-duties hot. The actress was best known for playing Rita Fiori in the later seasons of Spenser: For Hire (which also starred Avery Brooks) after they dumped Spenser’s regular girlfriend.
@2 Toryx:
Oooh, can you imagine the scam holoprograms? “How to Meet Girls”, “101 Surefire Pick-up Lines”, etc. Why didn’t Quark ever think of that?
The song is off the Signals album, I believe. The binary number in the lyrics (1001001) is ASCII for “I”. I’m a huge Rush geek. A fan of Star Trek and Rush. I’m lucky I made it out of my parents’ basement.
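If you want to check the math, one line of Python will do it (a quick sketch):

    # 1001001 in binary is 73 in decimal, which is ASCII for "I".
    print(int("1001001", 2))       # -> 73
    print(chr(int("1001001", 2)))  # -> I
    print(int("11001001", 2))      # -> 201, the episode title, for comparison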
@1 DemetriosX
The ending should have annoyed me more, but I actually liked it because their explanation made me laugh. Even if the Bynars were fairly certain Starfleet would grant their request, they couldn’t risk it. On the one hand, what benefit is there to being in the Federation if they won’t save everyone on your planet in the event of a catastrophe? On the other, for all their clever planning, they never thought they might need a backup for their computer, or an uninterruptible power supply?
@2 Toryx
Absolutely, and in many ways, Minuet is perfect for a career man like Riker. He doesn’t have time to raise a family, or even date properly, but he still wants companionship, right? Given that glimpse we had of his leisure activities when he was watching those holographic harpists or whatever in a previous episode, he’s kind of representing what a lot of guys would probably indulge in given this technology. And I like that he isn’t too ashamed or apologetic about it; he’s curious too, and nervous, and swept along. Fake woman or not, we can’t help what we feel, and it has to be a surprising experience for him as well. This is kind of like a one-night stand for him.
You’d expect that of Geordi or Wes, not the manslut.
Wait for it. Amazingly, Geordi manages to up the creepy factor when his turn comes around.
@3 ShameAndFailure
Agreed. Beauty is in the eye of the beholder and all that, but Riker consistently has terrible taste in women.
@4 etomlins
I used to like that dialogue–I do enjoy Bogart films–but it was just an awful pickup line. I was more annoyed by the fact that he goes into it making references to the fact that she’s a computer program, and doesn’t think it’s weird at all that she’s so self-aware.
@5 DemetriosX
I think the Doctor on Voyager used the holodeck to teach Seven of Nine how to be more human and date. That’s when he started falling for her, and she fell for Chakotay. Bleh.
Actually the song was on Grace Under Pressure. Off by one album.
@3 ShameAndFailure
Oh yeah, I also wondered about them being precursors to the Borg. I would be surprised if they didn’t influence the development of the idea in some way, and it could be an excellent reason for why the race hasn’t turned up more often in Star Trek. Apparently they are mentioned in an Enterprise episode, though.
I was more struck by their similarities to the Talosians in “The Cage” and “The Menagerie.” That can’t have been a coincidence.
Amazingly, Geordi manages to up the creepy factor when his turn comes around.
Oh, does he ever. One of the TNG episodes I hate most, because it creeps me right the hell out.
This one…meh. I liked that they solved it without violence, that was kinda cool and unusual, but the various plot holes you lot’ve outlined in OP and comments made it fall apart for me. Most especially the “We are giant computer-brain genius hive mind, utterly dependent on our technology for our very identities and lives, so naturally we have completely neglected to provide any plans to cover the impossible-to-imagine problem of ‘what if the big computer goes tits-up?’, even down to not having invented so much as a ZIP drive yet.”
Sorry, this episode is absolute trash.
First of all, Riker is a total jackass throughout, even before he gets faint over Minuet. He walks around the ship being incredibly rude to everyone he meets, including Data and Geordi (the “best” line listed here made me gag–what a jerk!) and the Bynars (twice talking about them to their faces as if they weren’t there). Someone needs some sensitivity training!
Then there’s Minuet, and there’s just no way in which this isn’t creepy. Not because he’s flirting with a hologram–whatever–but because he and Picard (his boss) double-team her and insist on talking about her as if she weren’t there at all (a theme). First of all, if your hologram is self-aware enough to follow the conversation: NUKE IT FROM ORBIT. Second, what’s even the point of the jazz joint and the hot chick if you’re constantly discussing how it’s just a fantasy? He doesn’t even get lost in the moment. It just comes across as one of those creepy dating experiments, where you try and neg the girl with comments about how fake it all is and then hope she’ll get with you (and your boss…) anyway.
I think it’s also a serious design flaw if red alert and total evacuation do NOT trigger the immediate termination of all holodeck programs. I mean I guess the Bynars could circumvent that, but as soon as Picard opens the door he sees the red alert, so I was confused about how it is that they didn’t figure out what was going on more quickly.
My main complaint though is just the creep factor. It made me feel really uncomfortable for at least 30 of the 44 minutes.
I like the Bynars, but I don’t buy that a computer-integrated race wouldn’t run a backup, and the whole “it’s all binary!” thing makes the ending absurd. If they weighed the options and the options were “Starfleet will help” and “Starfleet will not help,” as long as factors favored the former that’s the option they should have chosen! To say that they evaluated it but then didn’t want to take the chance–that’s not the action of a computer. A computer would have said “OK, 51% chance they will help–ask for help.”
I did like some of the little things, like Data doing the right thing and evacuating the ship (even though he regrets it for some reason??) and the interaction in the corridor about Parrises Squares. I wish they didn’t have Tar actually articulate the joke (“Worf is getting a sense of humor!”) but if you just ignore the explanation, it’s a really nice moment where you can’t really tell if Worf is pulling our leg or not, and that makes it feel more real.
I, too, thought it was pretty ridiculous that the Bynars claim all of this was a set-up for Riker, only to have Picard discover that it requires two people. Continuity, it works, people!
Warp 2
I really don’t know what to contribute about this episode, other than to note the experience of Riker playing a trombone. It’s funny that even the holodeck insults him about his playing.
Though really, this is not at all what I would think of when Riker asks for a New Orleans jazz trio. Maybe I’m being ignorant here, I don’t know NO jazz that well, but what I’ve heard was typically much more rhythmic…
Why did Minuet vanish in the mists just because the Bynars left? Did they wipe all of the Enterprise’s memory banks after they re-downloaded their backup? Why do that?
Regarding the representation of the Bynars: I am not without sympathy for the actors here, but they really aren’t very smooth at picking up from one another with their lines. If they are actually so closely interconnected that they finish each other’s sentences, there shouldn’t be such a weird pause when they switch off. Maybe this was due to the effects they were putting on the actors’ voices?
I actually like that for once someone wants to take over the Enterprise, and it isn’t as part of a nefarious plot to destroy the world!!!! or whatever. In fact, our heroes show up a little late to save the day from the hijacking — and that’s okay, and everything works out! I kind of like this. Sometimes things happen that aren’t in anyone’s control, and everything can still work out in the end.
I skip ALL holodeck episodes; once is too many for them.
And to me, walking in on someone else’s holodeck program is too much like (aboard the USS Backintheday) opening the curtain to someone’s bunk- you Just Don’t Do That!
I do think this episode reflects why I didn’t like TNG- a decent concept in a plot that doesn’t hold together and is BORING. Gah.
@1 DemetriosX
I saw this as another limitation of their species, that they are capable only of binary thinking. The answer was either yes or no, and in their way of seeing the world, perhaps with equal probability for each. Stupid, but bordering on clever.
@12 DeepThought
IIRC, this was reminiscent of the androids finishing one another’s lines in “I, Mudd.”
My guess is the clunky pause was deliberate, done to cue watchers that a new Bynar was taking up the train of thought. Too seamless and the point might have been glossed over… TV can’t afford to be subtle, you know.
On revisiting this episode, I’m struck by how much more sense it makes if you assume that there’s something a little supernatural going on. It doesn’t seem sufficient to assume that Riker is so entranced by Minuet simply because the Bynars had memorized his crew psychology report. And, as it’s been pointed out, why would it work on Picard as well?
No, there’s a definite atmosphere of the faerie story here. You know, the sort where a man stumbles into a ring of mushrooms, somehow finds himself spending passionate hours in the arms of a gorgeous elfin maiden, then gets interrupted somehow only to find out that weeks have passed instead of hours and he’s been kissing a dead oak tree the whole time. If you assume that Minuet has some sort of glamour on her then she makes more sense. Picard and Riker imagine they’re exchanging witty and urbane banter in a hot jazz club when in fact they’re having a stiff and weird conversation in a cheesy holodeck situation. Then the spell is broken, they tumble out of the fairy ring, and when they go back to look for it later it’s never to be found again.
I actually don’t mind this much. Fantasy dressed up as science fiction has never bothered me (indeed I tend to prefer it a little) and so I don’t mind that this story isn’t quite logical because it’s following the pattern of a kind of story that doesn’t obey the ordinary rules of logic.
Thank heaven Data’s Pinocchio Syndrome isn’t too big an aspect of the plot here…but, wow, I hate to admit it, but even I felt sorry for Data, not to mention Geordi, after Riker’s “blind man teaching an android how to paint” line. What the flying f-!
@16 etomlins
Although I wouldn’t credit the writers for thinking this through, the sorts of energy field manipulations—force field pressures, electromagnetic resonances—necessary to get the holodeck to simulate reality could also be applied against the physical bodies of users in the room. Recall how Spock originally described how the mind meld works, pressure manipulations, etc., and you can see where I’m going here with this. Whatever would cause a holodeck user to feel various pressures of touch and sensation could no doubt be ramped up into thresholds of pain and, by extension, pleasure.
A race that chooses expedience over ethics could probably tune the holodeck to produce hypnotic effects—cross the Uncanny Valley, so to speak. When they departed, the holodeck might reset to its standard lackluster mode. Hence, what is depicted here is no less believable than the holodeck concept in general.
I have to agree that this is the first one that gave a little glimmer of hope for the new show.
It also pretty much locks in Riker as the go-to unintentional comedy character.
@11 Torie
Someone needs some sensitivity training!
I think Riker was just comfortable enough with them to make a joke, and I thought it was a really funny one. Many close friends can make jokes at each other’s expense without taking offense. Of course, if Starfleet had harassment policies like most companies today, it could have gotten him in trouble. And it’s also pretty clear that Geordi doesn’t feel handicapped or sensitive about his blindness, though he does wish he could see “normally.”
I agree that the red alert should stop all holodeck programs, but I assume the Bynars prevented that from happening, just as they got the computer to indicate that Picard and Riker were no longer aboard. And I’m guessing they adjusted their plan when they realized they had Picard too, since they hadn’t yet locked the file. But this is a lot of justification to be making when they could have written explanations into the episode.
I wish they didn’t have Tar actually articulate the joke
This is kind of a funny typo, considering what happens to her.
Another thought: What is actually “speaking” in a holodeck program? The computer?
Then why is the computer’s operational voice limited in so many other ways? Why does it emit that little electronic warble when the “circuits come on”? Why does it always sound like Mrs. Roddenberry? Why can it speak in contractions and idioms when Data cannot?
In answer to why Minuet disappeared: I figured for Minuet to exist required the Enterprise computer, the Bynars, and the Bynar planetary computer. When the Bynars and their computer were disconnected from the holodeck, Minuet was essentially lobotomized. That goes along with the glamour thing suggested, but the magic was technological.
This gets into the question of when holodeck people are sentient, a question I find fascinating. Minuet seems to be in the same league as the Moriarty holodeck character created to challenge Data, and the Doctor from Voyager.
@21 ada
I agree, the issue of when massive raw processing becomes indistinguishable from sentient intelligence is fascinating. I have a sense the question will never be fully and satisfactorily answered, in the same way we continue to debate what features make humans a unique or special kind of animal… and the way we continue to debate the nature of intelligence.
The problem I have, though, is how could a creation from a computer become sentient when the computer is described as being non-sentient? Wouldn’t everything produced by the computer be able to be transferred back into the computer? This is the nature of data storage and retrieval. If Moriarty is sentient, then the computer that generates the program also stores the essence of sentience.
If we define sentience as simply being aware of some meta-existence (“I know I am a fiction, running on a chip somewhere”), then the only reason a holodeck program wouldn’t know it is sentient is because it is designed and limited not to know that.
I think too often the writers mistake “personality” for sentience. I mean, the Doctor clearly understands that he is a holographic projection. He has that meta sense. Is he not, then, sentient from the moment he first appears?
It does seem like if you ran a sophisticated program like the Doctor for a very long time it would tend to become very quirky, very extended beyond its original design parameters. It would have a “personality,” in much the same way that the way you’ve tuned your PC—the programs you’ve decided to run, the pictures you’ve chosen to display, the folders you’ve organized, the shorthand keystrokes you’ve added—over time make it unique from other PCs. Has it become human?
Don’t they fudge the issue of how the ship’s computer is able to create self-aware entities by assuming that the computer somehow expands beyond its normal confines? My memory of the Moriarty episode isn’t so good but I’m pretty sure there’s some mention of the computer using more “power” than before. A similar thing happens in the TOS episode “The Ultimate Computer”. It’s left vague how the extra power causes the computer to expand its processing ability.
Although, isn’t there a TNG episode where the ship’s computer is explicitly said to have become sentient all by itself and not through a holodeck character?
I don’t quite buy the notion that computers are going to somehow become sentient just by throwing enough raw processing ability at them. I can’t back up my objection with anything like evidence, though; it’s just a hunch. To take a limited example that I happen to know a little about, chess programs: it’s pretty widely accepted, although debatable still, that the best programs are superior to the best humans. (I’m not really going to accept that fully until a program is “invited” to a top-flight tournament where it’s up against multiple grandmasters, and also isn’t modified by human minders in any way during the tournament, as happened in the infamous Deep Blue-Kasparov match.) In any case, the advance of chess programs in skill is a perfect example of improvement of computer “intelligence” (of some sort) achieved by brute processing force. But can a chess program ever develop something corresponding to the human thought, “I am playing chess?” Or, to go a step further, could it ever think, “I want to play chess (or not)?” Could it ever, basically, regard the task from the outside rather than merely carrying it through at a human command? I don’t think so, however much power is thrown at the problem.
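To make the “brute processing force” point concrete, here is a toy sketch in Python of the minimax search at the heart of chess engines, run over a hand-built game tree rather than real chess positions. Nothing in it represents “I am playing chess,” much less “I want to”; it only propagates numbers:

    # Minimal minimax: leaves are position scores, internal nodes are
    # lists of child states. Pure score propagation -- no self-model,
    # no preferences, no awareness that a game is being played.
    def minimax(state, maximizing=True):
        if isinstance(state, (int, float)):  # leaf: a static evaluation
            return state
        scores = [minimax(child, not maximizing) for child in state]
        return max(scores) if maximizing else min(scores)

    # A tiny hand-built tree standing in for a few plies of chess.
    tree = [[3, 12], [2, 4], [14, 1]]
    print(minimax(tree))  # -> 3: the engine "chooses" the first branch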
Or, to go a step further, could it ever think, “I want to play chess (or not)?”
And even if it could, would it be necessarily a good idea?
@23 etomlins
Your points are thought provoking, but at some point machine intelligence must become a distinction without a difference.
This is one of these issues where, as I indicated, the goal posts frequently get moved. A computer successfully mimics one aspect of human intelligence so we zoom out and attempt to redefine and recharacterize or make more granular the nature of intelligence.
In a lot of ways it is like the “uncanny valley” in robotics and virtual reality: the better the simulation, the more uncomfortable and disturbing it becomes. People jumped out of their chairs in alarm the first time the Lumière brothers screened a train coming head-on; no one would do that today. The human mind and eye learns to detect the difference.
Your example of imbuing a computer with the “free will” to decide whether it wants to play or not is provocative, but clearly the nature of free will, whether it actually exists in humankind at all, is still hotly debated. If we can’t tell if humans have free will, how then machines?
Your point does raise the question of what kind of processing occurs in a human mind, what sorts of evaluations and prioritization occur, that allow one to determine whether they “feel” like playing chess. Could you program that sort of hierarchical decision-making into a computer such that—like Kasparov—the independent observer could not tell the difference between a human also making similar choices and refusals?
I think probably you could. But it would severely limit the usefulness of these devices as tools if they could evaluate whether or not they wished to obey a command (to say nothing of raising the gooseflesh of Asimov in his crypt).
@23 etomlins
Getting ahead of ourselves, I recall the striking thing about the Moriarty episode is that Data errs in providing instruction to the computer that the simulation should have observational skills that match his own, thereby giving the computer license to provide the simulation with access to a lot more small-d data than the world of ACDoyle. By that definition, the Moriarty simulation should instantly be aware it is inside a simulation, and itself a simulation. Instead, the script has M imbued with observational powers that allow the simulation to detect and deduce and access the control panel, thereby giving the simulation control over his own destiny, so to speak.
But I think the point here is that, like “The Ultimate Computer,” the program never really exceeds the sum of its parts. It is essentially working through the decision tree it was provided. Despite the wonder and puzzlement of the crew nothing really remarkable occurs.
The “sentient” moment arrives later, when M realizes his motives are fictitious implants and wants more out of “life” than to serve as a plot device.
Have I beaten this player more than 100 times?
|
YES
|
Has this player gotten angry or frustrated?
|
YES
|
I don’t want to play chess with this player.
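Rendered as code (the predicates are hypothetical, of course), that entire “preference” is a couple of branches:

    # The flowchart above, transcribed literally. The "want" is
    # nothing more than a scripted decision tree.
    def wants_to_play(times_beaten, opponent_frustrated):
        if times_beaten > 100 and opponent_frustrated:
            return False  # "I don't want to play chess with this player."
        return True

    print(wants_to_play(times_beaten=150, opponent_frustrated=True))  # -> False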
I like the idea of Minuet’s personality being the result of the cooperation between the Bynars’ computers and that on the Enterprise. This fits in with my view on another part of the discussion here. My pet theory on where computer intelligence will emerge is that it will not be from one computer but from networks – the networks or parts of the networks that make up the internet, starting with the search engines. And, I think we will not know about it until well after it does.
The programs seem to be set up to ‘learn’ what a user usually searches for in order to ‘provide a more useful search experience.’ Being an artist interested in improving my ability at painting the figure, I’m regularly searching for figurative paintings and drawings to learn from. I do multiple searches using different terms. When I started the process, I noticed that if I didn’t include the words ‘paintings’ or ‘drawings’ in an image search including any of the words ‘man’, ‘woman’, ‘boy’ or ‘girl’ the results were almost entirely photographs and you can imagine the results if the search included the word ‘nude’. Now, if I do the image search without the words ‘painting’ or ‘drawing’ I can get a decent selection of art works among the search results. The system is learning that I’m searching for art – not pornography. However. If I type the same search on another computer (on which the cookies have not accumulated that knowledge about me) I get the kind of result I got on my computer at the start of the process.
So. Here’s my point. Is there a threshold beyond which a computer system or network could become capable of operating beyond the (intended or unintended) limitations of its programming? Is there a point of unintended consequences beyond which a computer could be able to combine elements and functions from different programs to produce results far beyond what one might expect based on the intentions of those programs? Is there a point where the system or network can become aware of patterns in its accumulating knowledge then through association of data make connections that lead it beyond its programming?
Now, to tie this back into the discussion: could Geordi have crossed that point of unintended consequences when he set the parameters for Data’s opponent in that holodeck program? But, in this case, was it just that part of the program that crossed the threshold and not the full computer? Could the Bynars have intentionally set the system to cross that threshold but with the safety that it wouldn’t last?
Questions, questions. This is part of why I do like this episode.
@ 20 Lemnoc
Then why is the computer’s operational voice limited in so many other ways?
They’re probably just audio cues… I don’t need my phone to talk to me when I switch it on or get a text message. I don’t even really need a startup sound when I switch on my computer.
I don’t have much to add to these ruminations about computer sentience, but I’m definitely enjoying the discussion. I was just struck by the thought that in Star Trek, they seem to define sentience strictly as self-awareness. I also had the notion that creating an “opponent capable of defeating Data” only creates a sentient being if you believe that Data is sentient; or perhaps, if you don’t believe Data is sentient, then the Moriarty program is actually more of a person than he is.
I have to believe that once word on Moriarty got out, Starfleet would either a) lock down their computers to prevent this from ever happening again, or b) start doing serious work on artificial intelligence. I could see Dr. Zimmerman experimenting in this area without authorization, in order to make his holographic program more effective. And once word gets out on the Doctor, what’s to stop Starfleet from creating holographic captains, with the thoughts and experience of Kirk, Picard, Sisko, Archer, Janeway, etc? An emergency crew that kicks in when everyone’s dead or incapacitated… Or perhaps “unmanned” long-term missions into deep space with holographic crews.
One of the flaws in Trek, I think, is they explore these intriguing concepts and then drop them to keep things the same as they’ve always been. Many of the discoveries and experiences of the various crews should have had profound impacts on society.
@ Eugene #29
I’m in total agreement about the sort of thing that they should have done — a ship full of Datas or of holographic crew members would be the smart way to explore space (just as in real life we’re better off sending a lot of robots to Mars than people in tin cans). As you say,
“Many of the discoveries and experiences of the various crews should have had profound impacts on society.”
The problem (as I see it) is that many of those discoveries would be profoundly impactful in a way that totally breaks the series. The technological breakthroughs they have typically point to some kind of post-Singularity future where people are rendered pretty much irrelevant because better alternatives are available. (Of course, Roddenberry would’ve been staunchly opposed to this; cf. all of Kirk’s interminable speeches about the human spirit blah blah blah). By the time it’s feasible to replace Starfleet with sentient holograms zipping about the galaxy, the sensible thing for our heroes to do is to join the Borg… at which point I think we’re no longer watching an intrepid adventure story, but a dystopian horror.
As to the broader conversation:
Regarding computer sentience, I am very much in the “strong AI is not possible” camp. Intelligence is not a matter of processing power or network effects; it’s very intimately connected to our physical brains. What we think of as intelligence is a result of emotions as much as logic. And ultimately human thought is about the ability to remap physical senses & emotional states onto abstract concepts and understand the abstract ideas thereby. Who knows how this happens; but we won’t have actual AI until it does. But to make it happen, it can’t just be a program running on a general-purpose (or even highly specialized super-)computer any more; it’ll be a brain and a body that just happen to be all made out of silicon.
Lemnoc says that every time we reach an AI breakthrough, the goalposts get moved — but I think this is because every time we reach a milestone, we realize that what we thought of as a standard of intelligence really isn’t, that we’d set the parameters wrong, that we’re training a system to do a whole suite of tasks but never really understanding what it means to be human.
We’re talking about a society that doesn’t believe in the utility of fuses or off switches, so it is hard to imagine Starfleet deciding that fail-safes might be useful.
Where the concept really breaks down—particularly in the Moriarty ep—is that the computer, for the sake of generating a character and facilitating make-believe, would allow said character to access the computer’s higher command and control functions. There is no way, in a sane universe, M could say “Arch” and the command would register and be obeyed as though it came from some meta source outside the program or system. I mean, the computer knows the computer is doing the asking, yes? M could be as self-aware as imaginable, but the command would simply go nowhere.
I mean, certainly there must be some kind of lock to make sure the children aboard the Enterprise couldn’t just go tinkering with the ship’s higher c&c functions…. Oh. Wesley.
I agree with @30 DeepThought that strong AI just isn’t going to happen. Apart from his very cogent arguments, there is the fact that computers as we can conceive of them now approach a problem as a series of yes/no or, if anyone ever implements fuzzy logic, yes/no/maybe questions. But the human brain is more likely to come up with yes/no/maybe/purple/ham sandwich/17/duck when looking at a problem and to find the solution in 17 purple ducks eating a ham sandwich. The yes/no/maybe approach is highly dependent on initial conditions and the way in which the question is framed.
As for TNG never realizing the full effect of their discoveries, not only did they come up with a way to put Jim Kirk on the bridge of every ship in the fleet (which was the first thing my friends and I thought of after the Moriarty episode), they would eventually prove that Dr. McCoy was right about the transporter, conveniently forget that higher warp speeds are rupturing the fabric of space, and come up with immortality (downside: you have to go through puberty every few decades). They did manage to create arcs where the main characters changed over the series, but applying that to the universe of the show itself was still a bit beyond them.
Arguing that machines aren’t going to duplicate the [irrational] aspects of human thought and behavior is a bit like arguing bicycles are never going to have three wheels. To pursue the goal would limit the utility of the tool.
IMO the larger point is that, from the standpoint of an outside observer, distinguishing between decisions rendered by a machine and those arrived at by a human could come to be a distinction without a difference.
Several years ago, programmers built software that could contribute to a chatroom. The program used transactional analysis techniques. Someone would chat, “I think I’ll go fishing in Vermont this weekend.” The program would answer, “What is it about fishing you like?” “I like getting outdoors.” “Getting outdoors in Vermont is nice this time of year. Tell me more about fishing…” and so on. Many people in the chatroom would spend up to an hour conversing with the machine before they clued in that an AI was at work. And this was a relatively unsophisticated application.
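The trick is mostly pattern matching plus echoing fragments back as questions. A minimal sketch of the idea in Python (the rules are invented for illustration, not taken from that program):

    import re

    # ELIZA-style reflection rules: match a pattern, mirror a captured
    # fragment back as an open-ended question.
    RULES = [
        (r"I think I'll go (.+)", "What is it about {0} that appeals to you?"),
        (r"I like (.+)", "Tell me more about {0}."),
        (r"(.+)", "Go on."),  # fallback keeps the conversation moving
    ]

    def respond(message):
        text = message.strip(" .!?")
        for pattern, template in RULES:
            match = re.match(pattern, text, re.IGNORECASE)
            if match:
                return template.format(match.group(1))

    print(respond("I think I'll go fishing in Vermont this weekend."))
    # -> What is it about fishing in Vermont this weekend that appeals to you?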
Moore’s Law predicts a doubling in computer capacity every 18 months, a trend that has not slowed since it was originally proposed. Recall Sulu’s caution: “Imagine you had a penny and doubled it every day. In a month, you’d be a millionaire.” Considerably less than a month, it turns out. Then: What’s one million x one million?
Imagine you have a jar with microbes that double every day. When is the jar full? The day after it is only half full. When is a second jar full? The following day. And so on.
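The penny arithmetic is easy to check (a quick sketch):

    # Double a penny daily: on day n you hold 2**(n - 1) pennies.
    # A million dollars is 100,000,000 pennies.
    day, pennies = 1, 1
    while pennies < 100_000_000:
        day += 1
        pennies *= 2
    print(day, pennies)  # -> 28 134217728: a millionaire on day 28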
We’re just on the threshold of highly advanced computers approaching the processing capacity of the human brain. It took a long time to get here, but here we are. And I do agree with the earlier post about networks being the real interstitial relays where the extraordinary leaps may occur, the way dendrites and synapses connect neurons in our own brains.
I guess my point here is, without disagreeing with the interesting points made, declaring something will never happen may be a bit premature….
Pretty much everything I could say about this episode has been said. The writers, like so many writers, come up with a contrived situation and it’s like no one ever challenged them on how that would really work. I’ve been a table-top RPG player for 30+ years and a gamemaster for nearly all of those. I can tell you that until a plot is tested and questioned you don’t know what’s going on. The Bynars are only inventive to people unfamiliar with computers.
As to the idea of computer awareness and intelligence, I too hesitate at the ‘never’ declaration, but it’s going to be a very difficult problem to solve. Principally because we don’t know how our own awareness ticks. I’ve explored AI awareness in several stories of my own because I truly believe that it will not be a person in a box, or a free-floating human brain, but it will be, if it happens, a truly alien intelligence.
@ 31, Lemnoc
“Where the concept really breaks down—particularly in the Moriarty ep—is that the computer, for the sake of generating a character and facilitating make-believe, would allow said character to access the computer’s higher command and control functions. There is no way, in a sane universe, M could say “Arch” and the command would register and be obeyed as though it came from some meta source outside the program or system. I mean, the computer knows the computer is doing the asking, yes? M could be as self-aware as imaginable, but the command would simply go nowhere.”
But how would the computer determine the source of the command? For ease of explanation by a minimally computer-literate person, we’ll say the Enterprise’s computer system was operating on BASIC. I’d say the computer ran the command against a series of “If, Then” checks. However. M had passed beyond the threshold I suggested earlier and was operating beyond the limitations of the programs. In this condition, M may not have been constrained by the “If, Then” safety checks. While the computer may not have known who issued the command, the checks did not detect the logic hooks, so the computer defaulted to accepting the command as coming from a living being.
On the other hand. Looking at how the Bynars had evolved (by choice – it seems) to become extensions of their computer network, one can use this same logic to explain why they made the choice to take the Enterprise. No matter how much common sense says that the UFP and Star Fleet would have helped them when asked, their “If, Then” checks always indicated a chance of the answer being “No.” From their point of view, a “No” answer would not have been acceptable and they had to do something to make sure that “No” would not be a possibility.
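Roughly the kind of “If, Then” guard I mean, sketched in Python rather than BASIC (every name here is invented for illustration; none of it comes from the show):

    # Hypothetical command guard. The failure mode described above is a
    # source check that never fires, so a simulation's command falls
    # through and gets treated as if a living being had issued it.
    def execute(command, source_kind):
        if source_kind == "holodeck_character":
            return "DENIED: command originated inside a simulation"
        if source_kind == "crew":
            return "EXECUTING: " + command
        return "DENIED: unknown source"

    print(execute("Arch", source_kind="crew"))                # obeyed
    print(execute("Arch", source_kind="holodeck_character"))  # refused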
@5 Ludon
A subsystem on the computer was generating the command. Would not—if for safety reasons alone—the computer recognize what items on the holodeck were artificial (and subject to deletion) and biological (and not subject to deletion)? Would it not recognize commands being delivered by fictions generated within its own recreational make-believe?
We’re getting way ahead of ourselves (which I regret, but I reckon we can take this up again when the episode airs), but at this point M had not passed any threshold. M was a computer simulation erroneously (by Geordi, I think someone correctly pointed out) given observational capacity beyond the ACDoyle source material. M observes the Arch called into being. M calls the Arch into being, and therefore is able to pass beyond the threshold. Significantly, M has not passed the threshold until M is able to command the control panel, whereupon all hell breaks loose. Even then….
My point is that M calling for the Arch is like you and me, knowing the existence of God, calling out, “God, appear before me now.” It is a command not subject to compliance. It is a command that will generate no response. Try it, you’ll see.
The computer should not entertain a request from one of its subroutines to self-destruct. At least, in any other sci-fi program other than “Dark Star”!
—
At the risk of boring everyone, I recommend this link:
http://qntm.org/files/hatetris/hatetris.html
It is difficult to play this otherwise familiar little game without gaining a sense the program is both perspicacious and malevolent, out to get you. In that sense, the experience is very unlike most [usually helpful] computer routines. It is very much an AI experience, but it is tough after a few rounds not to assign an emotional quality to the levels of frustration this widget is dishing out. You start to resent this bastard.
Just an illustration that AI can absolutely be nonhelpful. A cautionary tale.
@ 36 Lemnoc
I think we’ve reached the point in the issue that illustrates what I meant when I said we might not know about it until after it happens. We have reached the Chicken and the Egg question. Does M saying “Arch” push him past the threshold? Or does M making the connection that leads him to say “Arch” do it? I hope we haven’t talked it out here because that episode is likely to have a lively discussion.
@ 29 Eugene
“One of the flaws in Trek, I think, is they explore these intriguing concepts and then drop them to keep things the same as they’ve always been. Many of the discoveries and experiences of the various crews should have had profound impacts on society.”
And I see this fitting in with the discussion of Star Trek’s place in TV history. At this point, Star Trek was still clinging to its roots with the cosmic reset switch being thrown with the rolling of the closing credits each week. It was not until DS9 that Trek seemed comfortable with letting each week build upon the previous week’s triumphs or setbacks. In some ways, Voyager (at least what I saw of it) seemed like a slight step back but I guess that was due to the nature of road-shows.
@Lemnoc #33 & others in thread:
Arguing that machines aren’t going to duplicate the [irrational] aspects of human thought and behavior is a bit like arguing bicycles are never going to have three wheels. To pursue the goal would limit the utility of the tool.
I think a great many computer scientists would agree with you — but that’s exactly why we’ll never have strong AI (by which I mean artificial sentience). If we’re only duplicating rationality, we’ll never have intentionality, passion, consciousness… just really good chess algorithms. It’s like John Searle’s thought experiment of the Chinese room — even the most amazing algorithm is still just a set of instructions. They might elucidate patterns we’d never have noticed; but that’s not intelligence, and we’re going about it the wrong way if we think you can get sentience from throwing more processing power at that. Watson may be able to parse wordplay, but it can’t laugh.
Several years ago, programmers built software that could contribute to a chatroom…
Chatbots are actually a great example of what I’m saying. ELIZA is 45 years old. No one really claims it’s artificially intelligent — but it still makes for a decent talk therapist just from pattern matching. With all respect to the Turing test, it’s not a wise plan to base your measure for sentience on the judgment of a species that sees agency in a rock. What we’ve been calling AI is the ability to simulate intelligence & accomplish specific tasks. We’ve gotten really good at simulating intelligence (especially in our undergrad seminars!) & it’s not that hard to fool a human, but there isn’t a “there” there when you’re dealing with instructions that are not tied to a specific, embodied entity that interacts with the world.
I guess my contention is this: artificial sentience may be possible, but it requires capabilities beyond those of a Turing-complete computer. At the very least it requires an entity with the ability to interact with the world and the ability to apply the same sensory pathways to its own functioning which it does to that outside world. I would wager it also requires the existence of externally enforced needs which can constitute an existential threat if they are not met — that we cannot have sentience without the threat of death. But now I’m getting pretty far afield — but I’ll definitely be revisiting a lot of this when we get around to Measure of a Man!!
What a great conversation. I’m looking forward to “Measure of a Man” for sure now.
@ 12 DeepThought
I still think the holodeck shouldn’t be that familiar with slang. If Riker says “and a bone for me,” a thigh bone should be materializing in his hand, not a trombone.
@ 13 sps49
But some of them are good! Well, we’ll see.
@ 16 etomlins
That would make this episode make a lot more sense. I think they were going for the sufficiently advanced technology angle, so it’s close, but it doesn’t quite work as-is.
@ 19 Eugene
I couldn’t have planned a better typo.
@39 Torie
If Riker says “and a bone for me,” a thigh bone should be materializing in his hand, not a trombone.
I was too distracted by double entendre to be too annoyed with that, but you’re right.
What if it had given him a traditional Tibetan thigh-bone trumpet? That’d be the best of both worlds!
I have to disagree slightly on that last matter; having done some work in speech recognition as a linguist, I know we can do a certain amount of context-sensitive word recognition now – I’d be very, very surprised if computers of that sophistication wouldn’t be able to do a better job of it than the average non-native speaker. I mean, the probability that someone in a jazz club, discussing making music with the band, should mean “I would like a part of the skeleton” rather than something, y’know, related to the context is vanishingly small. There’s a great project going on, at Caltech I think, where they’re basically trying to define all the words of English, with all their meanings, and (among other categories of data) suggest the domains of speech where a given word is more likely to be found; again, I would expect this to be a standard programming tool by this time in the future.
Where I’d expect the computer to go wrong in that context, the jazz club, is, say, if Riker said he needed something that didn’t fit the context, but had a synonym that might. Sort of the opposite situation: Riker musing “I wish that monk was here now!” and having the computer produce Thelonious, rather than the Bajoran monk who’s the villain in this week’s episode that he’d been actually thinking about.
Slang is less difficult than people think. Idioms, now, idioms are absurdly hard. Recognizing whether someone is referring to the real or the imaginary or the counterfactual…that’s tricky. Slang is usually restricted by domain (that is, there are areas of speech in which it is not or little used), and tends to have fairly small actual added content to the definition of a word: usually a single extra meaning, which again can often be determined by both speech and general context.
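A toy version of what I mean, scoring each sense of “bone” by overlap with a domain vocabulary (the word lists are invented for illustration):

    import re

    # Each sense carries a small bag of domain words; pick the sense
    # whose vocabulary overlaps the context the most.
    SENSES = {
        "trombone (musicians' slang)": {"jazz", "band", "club", "play", "music"},
        "part of the skeleton": {"skeleton", "body", "fracture", "medical"},
    }

    def disambiguate(context):
        words = set(re.findall(r"[a-z]+", context.lower()))
        return max(SENSES, key=lambda sense: len(SENSES[sense] & words))

    print(disambiguate("Sitting in with the jazz band at the club: and a bone for me."))
    # -> trombone (musicians' slang)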
This has been your linguistically nerdy moment. :)