“Elementary, Dear Data”
Written by Brian Alan Lane
Directed by Rob Bowman
Season 2, Episode 3
Original air date: December 5, 1988
Star date: 42286.3
Mission summary
The crew of the Enterprise has absolutely nothing to do for a few days, so La Forge and Data decide to cosplay as Dr. Watson and Sherlock Holmes in the holodeck. Things seem to be off to a great start: The holographic recreation of the drawing room at 221B Baker Street is incredibly detailed, and the friends soon settle into their roles. But as soon as Inspector Lestrade arrives with a case, Data instantly recognizes the scenario as Sir Arthur Conan Doyle’s “A Scandal in Bohemia” and “solves” the mystery immediately. La Forge storms out, annoyed that Data has spoiled the ending of a story published 474 years earlier.
Dr. Pulaski revels in La Forge’s predicament, citing this as further evidence that Data is a mere machine, incapable of Holmes’ amazing intuition and deductive reasoning because he lacks insight into the human soul. Challenge accepted! La Forge attempts to prove her wrong by instructing the holodeck to create an original mystery in the Holmes style. The poor computer does the best it can, but it only manages to blend elements of several stories into an unlikely mystery, like a hack writing a pastiche. Once again, it is all too easy for Data to anticipate the solution to the simple puzzle set before him.
La Forge doesn’t give up easily. Third time’s the charm, and he thinks he knows how to present Data with something entirely new to truly test his metal friend’s mettle.
LAFORGE: Computer, in the Holmesian style, create a mystery to confound Data with an opponent who has the ability to defeat him.
COMPUTER: Define parameters of program.
PULASKI: What does that mean?
LAFORGE: Computer wants to know how far to take the game.
PULASKI: You mean it’s giving you a chance to limit your risk.
LAFORGE: No, the parameters will be whatever is necessary in order to accomplish the directive. Create an adversary capable of defeating Data.
Well, what could possibly go wrong?
On the Bridge, Lt. Worf detects a brief surge in power which is surely nothing to be concerned about. Best forget it ever happened, really, and go about the incredibly important business of standing around watching Riker’s beard grow. Coincidentally, at the same moment, a holodeck character who has been studying Data, La Forge, and Pulaski as they access the computer’s control arch suddenly gains a new awareness. It turns out he’s Professor Moriarty, Holmes’ arch nemesis — no doubt because it seems he, too, can call up the arch. Pretty sure holodeck characters shouldn’t be able to do that. The game is afoot, indeed.
In no time at all, Dr. Pulaski has been abducted, with Data and La Forge fast on her trail. They encounter a random murder along the way, which Data easily solves through careful observation and deduction—but he also realizes that this less interesting case has nothing to do with Pulaski’s disappearance. Data sees Moriarty, realizes who his adversary is, and follows the man to his secret hideout. The villain knows that they are not Holmes and Watson and shows Data a paper that sends him running from the holodeck: a crude drawing of the Enterprise. Yeah, holodeck characters really aren’t supposed to know about that.
Data is unable to override the program that is running, so they seek advice from Captain Picard, who decides to call one of his famous meetings.
PICARD: All right, tell me from the beginning exactly what happened.
LAFORGE: Well, Dr. Pulaski and I had a discussion about whether Data could solve an original Holmes-type mystery.
PICARD: Which you asked the computer to provide.
LAFORGE: Yes, with a worthy opponent.
PICARD: Worthy of Holmes?
LAFORGE: Oh, my God. I asked for a Holmes-type mystery with an opponent capable of defeating Data. That’s got to be it.
PICARD: Merde.
A moment later, Moriarty shakes things up, literally; he’s managed to gain control of the ship’s navigation system, and he jostles them all a bit, just to get the captain’s attention. There’s no other choice: Picard will have to dress up too and play the program through to its thrilling conclusion.
Sherlock Holmes and Ebenezer Scrooge walk into the holodeck and confront Moriarty, who it turns out is actually a decent sort of chap, if somewhat misunderstood. Now that he’s become sentient, he wants to live. He also wants to leave the holodeck, which Picard assures him is impossible. The criminal mastermind with a computer-generated heart of gold-pressed latinum takes the captain at his word, possibly because he sounds English, and relinquishes control of the Enterprise. In an indulgent moment of sentimentality, Picard promises to save the Moriarty program in case one day, perhaps five years from now, they might be able to give him what he wants. Or maybe they’ll just forget all about him, who knows.
Even though La Forge royally screwed up and created a new life form that could have killed them all, Picard decides to look the other way. This case is closed.
Or is it?
Analysis
When I was in junior high, I was heavily into two things: Star Trek and Sherlock Holmes. I loved the stories and I was a huge fan of the Jeremy Brett Granada television adaptations of them. So this episode was basically written for me, and it was always one of my favorites of the series.
It seems that since then, I’ve become a little more discerning in my tastes, and I now realize that as far as the series goes, this is a terrible Star Trek episode, and it doesn’t work very well as a Holmes story either. There is still a certain thrill to seeing elements of the Holmes canon represented on the screen so lovingly; there is a fair amount of detail, and the sets and costumes are beautifully rendered. But Brent Spiner, for all his many talents, is not even a passable Holmes—which doesn’t make any sense because Data would easily be able to mimic any of the master actors who have performed the role over the years: Rathbone, Brett, Cumberbatch. The less said about La Forge’s Watson, the better.
The basic premise, if you can accept the inherently flawed idea that the Enterprise computer could allocate enough resources to create an artificial intelligence, especially with only a brief power surge, could be compelling. But the story of a computer-generated entity gaining awareness, perhaps even a soul, yet remaining confined to an artificial environment, is simply squandered here. Moriarty doesn’t end up being much of a threat, and he is mollified far too easily. The concept of holograms becoming sentient crops up again and again on Star Trek, but it really only works as an intellectual exercise. Leave your skepticism at the holodeck exit.
The episode is also hurt by the fact that it essentially reboots the whole concept of holodecks and what they can and can’t do, and treats even those rules inconsistently. I’ll be generous and accept that Data can take a piece of paper outside the holodeck grid, if it was somehow replicated as a physical prop. But Moriarty seemed to be aware of the computer arch before they ever programmed him to be more than a program, and we already saw another character in “The Big Goodbye” confront the idea that he doesn’t really exist. So what makes this special?
However, the real saving grace of this episode is the fine performance of guest star Daniel Davis as Professor Moriarty. While he may not be a good Moriarty per se, he is every bit the English gentleman, obsessed with knowledge and wrestling with a rather unique identity crisis. Programmed with the intelligence of the fictional genius, or at least as smart as Data, he is also struggling against the evil nature he was programmed with and a very basic desire to live a fulfilling, meaningful life.
And yet, this episode is mostly enjoyable, even if it doesn’t attain the success of the humorous installments of the original series, like “The Trouble with Tribbles.” You can tell that the cast, crew, and writers all had fun making this episode. I wish they could have taken this opportunity to conceive of an original Holmesian mystery to challenge viewers as much as Data, or at least adapt something from outside the canon that would be less familiar; but like the Enterprise computer, they were either unable or unwilling to put in the effort to do more than simply rearrange the furniture.
Eugene’s Rating: Warp 3 (on a scale of 1-6)
Thread Alert: This episode actually has some terrific costumes; in fact, it was nominated for an Emmy for Outstanding Costume Design for a Series. However, even though Worf’s suit is fine, he looks absolutely ridiculous in it. Maybe it’s the gloves. I’m sure that was the intended effect, but still. Ridiculous.
Best Line: Moriarty: If I destroy these surroundings, this vessel, can you say it doesn’t matter to you? Interesting pun, don’t you agree, for matter is what I am not.
Trivia/Other Notes: In the original filmed ending of this episode, cut by Roddenberry, Picard lies to Moriarty about him being able to leave the holodeck, after realizing that the disabled fail-safes had allowed Data to take a piece of paper out with him. Right… Because paper is as complicated as a human being.
The producers used the Sherlock Holmes characters without permission, believing that they had passed into the public domain, but the Sir Arthur Conan Doyle estate still controlled a percentage of the rights and told Paramount they would require a usage fee for future episodes. This prevented TNG from revisiting the character until the sixth season episode “Ship in a Bottle.”
Previous episode: Season 2, Episode 2 – “Where Silence Has Lease.”
Next episode: Season 2, Episode 4 – “The Outrageous Okona.”
A poor episode for all its popularity and another that makes no sense. It also demonstrates that the writers still had no idea how computers work. Where do all these resources come from to give Moriarty sentience? How did the computer even come close to being able to implement Geordi’s vague and cack-handed instructions? Wouldn’t it have been easier just to have the computer cheat a little and have Moriarty react according to discussions between Data and Geordi even though he couldn’t have heard them?
In our group, we decided this episode meant they could now put Jim Kirk on the bridge of every ship in the fleet (Picard was still a French surrender-monkey who would rather hold a meeting than act, as far as we were concerned). Plus, he’d have Troi in bed by the first day, thus pissing off Riker.
And if all that isn’t damning enough, the costs of this episode led to budgetary problems at the end of the season. Yeah, this story can be held at least partly responsible for “Shades of Gray”. Warp 2.
Oh, what a waste this episode is. The crippling problem is that the producers of Star Trek TNG either had a poorly constructed understanding of conflict, or felt that their viewers were limited in that understanding. Every episode had to orbit around a *threat*, generally to the ship, and you couldn’t have a conflict that revolved around characters and their differences.
Instead of an episode about a poorly worded instruction creating life and locking the Captain out of command of his own ship (really? Any ensign can do this on his day off?), they could have explored the character dynamic between Dr. Pulaski and Data.
Imagine in its place a story generated by the scene in Ten Forward, where Data is challenged by the doctor. Here you have conflict: the rest of the crew and Data want to prove Pulaski wrong; she, of course, wants to prove herself right and show that Data is nothing more than a particularly good simulation. A game is constructed, and Data winning or losing has real emotional costs for everyone, perhaps to the point that the doctor starts cheating and trying to change the game mid-play when Data is winning. We have here a story about a character confronted by her bigotry and what that means. In my opinion, a much more dramatic story than a false threat to the ship, because we *know* the ship is not going to go boom, so like all the other weekly threats this one has no bite.
Guh. I thought Season 2 was better?
– curmudgeonishly
Yeah, the computer’s ability to take vague instructions and generate impossibly good results has always been around — from Spock’s use in ToS, to instructions like “speculate” — but here it’s a particularly ridiculous extreme: “Computer, create very strong AI. Oh, you’re done, good, now I have this Traveling Salesman problem you could maybe tackle for me if you’ve got a few cycles to spare…”
I think even as a kid I realized the computer should not be able to just do this.
I’m with Bob. Setting aside the deeper flaws, this still has Roddenberry all over it because it can’t just be a story about the characters. There has to be some emphasis on The Ship. Oh no, The Ship is in danger! Poppycock. If you’re going to tell a mystery, tell a mystery! Geez, it’d even be more interesting to see Pulaski trying to come up with a mystery story and not being very good at it, or the crew just putting on a play in their spare time or something.
As others have noted and as we talked about on ‘11001001,’ Moriarty should no more be able to get access to the computer by saying “Arch” than you and I can summon God by issuing a command that God appear. The command would simply go nowhere, because the computer is not (pray, let us hope!!) programmed to issue and obey its own subcommands.
To buy this episode’s claims we must believe the computer cannot distinguish between living holodeck occupants and the artificial creations of its own software subroutines, sensors, and forcefields. How can the computer simultaneously carry out its programming to the letter and not be aware that it is doing so?
If a ship’s computer can produce miracles of creation merely by over-enthusiastically carrying out a poorly framed voice command, Starfleet has more trouble than Tribbles.
I had been worried about this one. It was once among my favorites for the same reasons as Eugene (looooooved Holmes and ST, what could possibly go wrong?). Watching it this week was really, really painful.
First of all, the computer. Nothing makes sense. How would a computer be able to generate independent AI? How can a hologram activate the arch? Why can Data take the picture outside of the holodeck? How is it remotely possible for a holographically generated being to seize control of the ship? It’s not just stupid, it’s infuriating. There was no reason to go through all these illogical hoops to put the ship in danger. Isn’t it sad enough that Geordi created life and now it’s basically in a prison for all eternity?
Second, Moriarty. I know you’re supposed to get all weepy and sad that he’s trapped in fairyland, but I believe that everyone involved is forgetting one thing: Moriarty is supposed to be one of, if not the, most diabolical criminal masterminds imaginable. This is the “Napoleon of crime”! He has an air rifle in his cane! He’s not someone you want gallivanting around the universe!
Thirdly, I fundamentally disagree with Pulaski’s characterization of Holmes in the first place. She claims that solving an original mystery takes the kind of intuition and insight into the human soul that Holmes had and that Data, as a machine, lacks.
Now it’s been years and years since I read ACD, but my memory reminds me that Holmes is a total eccentric WEIRDO. He’s got vices up the wazoo, is arrogant, lies to people, lives in total filth but is obsessive about personal cleanliness, and has an ego that leaves him vulnerable to flattery and manipulation. Further, his understanding of the “human soul” is pretty weak. It’s Watson who understands people and motivations; Holmes comes off frequently as an emotionless robot (unless he’s trying to impress someone, when he’s more like a peacock). He has basically no friends and few personal connections whatsoever. He solves his mysteries through a combination of almost computational reasoning and an extraordinary wealth of knowledge–both things that Data has in spades.
This is really a Warp 1.5, but I’ll give it a 2. How can they do a Holmes episode and so thoroughly misunderstand Holmes?
Oh Torie, I totally agree that the writers missed Holmes by light-years. He didn’t understand the human soul; he understood that facts are stubborn things.
Of course Moriarty becomes nice, now that he has a soul! (A hologram with a soul, how lame is that?)
If you must have the ship’s computer create a fully aware AI on the holodeck, for all that is holy keep him true to his nature and evil. Man, the possibilities are cool. Think of Data going back to the holodeck from time to time, trying to understand human evil by way of a relationship with the professor. Sort of his own Hannibal Lecter.
I’d say Holmes understood the dark side of the human soul, having it in abundance himself. And it’s not entirely true he was without a humane side (“The Blue Carbuncle” is positively Dickensian). But it is true that Watson was the kindly and tender one.
…Spock and Bones probably would have made a better analogous teamup…
If we discard the “miracle” of Moriarty’s creation, then the only interesting gem in the dreck is that, with his expanded awareness and perception, he (later) realizes he has been written as a cliché. He has the agency to become more than the sum of the single story that contains him. That part is interesting, at least. More interesting than the Three-Card Monte the writers attempt to pull here.
Wouldn’t the first thought of the captain be that they were victims of some kind of sophisticated, not entirely understood illusion? That the computer had not created self-aware new intelligent life, but that somehow they were being deceived (self- or otherwise) into accepting that as possible?
Everyone is pretty gullible throughout, believing the impossible possible, in a very unHolmesian way… except the viewers ;-)
We know the ship’s computer can generate an independent AI by the time Voyager takes place, because that’s what the EMH is.
@S. Hutson Blount #9
Yeah, but that’s at least originally preprogrammed and has area-specific knowledge more than anything else… Not like Geordi’s vague parameters. More to the point it’s on a ship that’s supposed to be built using synthetic organic nervous tissue–the closest thing possible to an artificial brain, so it’s less shocking it can be intelligent too…
I’ve actually wondered, given the intuitive leaps we’ve seen ships’ computers make, why they aren’t considered sentient too.
While I agree that this story has flaws – especially in not developing the dynamic between Dr. Pulaski and Data – I’ll not jump on the bandwagon with everyone else. I’m still entertained by this episode. Maybe one reason I still like it is because I’ve never really gotten into the Holmes stories. Maybe Dr. Pulaski had never really gotten into them either, and that’s why she made the claim Torie describes above. She spoke based on what she thought she knew about the character. I know I’d have made the same assumption based on what I thought I knew about him. If that was meant to be the case, then this was another missed opportunity to develop the Data/Pulaski dynamic.
As I said in the discussion of 11001001, I think the problem of creating artificial intelligence will end up being vastly different from what we believe based on our current perspective of the problem. And I believe that the first true AI could develop before we are even aware of the potential for one to develop where it does.
We (non-professionals) think of the brain as being an organic computer when we should probably think of it as a network of processors – each processor handling its own function. We fans of Star Trek want to think of the Enterprise computer system as one massive computer when it is more likely to be a vast network of computer systems organized by the array of central processing systems and augmented by vast banks of memory devices. (Remember the reason for the priority request for the computation of the value of Pi in the episode The Wolf In The Fold?) The brief power surge Lt. Worf detected could have been the power needed to bring additional processors and memory devices online (from standby) to handle the problem presented by La Forge’s request. A quick check of the alarm would have revealed the initial cause of the surge and, at most, Worf should have had to flag the incident for future review. That scene could have been written better, or dropped.
I still contend that the parameters of La Forge’s request were enough of a setup for this episode. As demonstrated in Measure Of A Man, the ship’s computer has extensive data on Data. The computer also has Data’s library card. In order to create an opponent worthy of Data – capable of beating him – the computer would have reviewed all this data. To meet this request, I believe the computer would have developed Moriarty far beyond the sum of his character in the written works. Moriarty would have to be capable of knowing everything that Data knows, and that means knowing that he is (or was) a fictional character artificially created within a vast electronic system. Knowing and grasping (understanding) are two different things, and I thought this was suggested nicely here with Moriarty trying to fit the pieces together.
To meet La Forge’s request, Moriarty would also have to possess another attribute held by Data as well as the rest of the crew – free will. This free will could have had the added effect of giving the character the ability to interact with the computer and the ship’s systems. Had he been constrained to the character as written in the Holmes books (and other presentations) he wouldn’t have been a worthy opponent. If Data would be expecting a pure evil super criminal, wouldn’t it make sense (regarding the request) for the computer to create a character capable of being something different? I liked that Moriarty expressed a need to be beyond all that his character had been written to be. (Shades of The Last Action Hero here. “Do you like Mozart?” “I don’t know. I think I’m going to.”)
If I were one to give ratings here, I’d give this one a 4.
Agree with Lemnoc.
One thought about the discussion around Moriarty and his character evilness (or lack thereof)…
We saw in the episode Emergence that an AI created by the Enterprise would be what the Enterprise crew makes it. The entity relied upon the logs, files, and all data within the ship’s computers at that time. (At least that’s what I vaguely remember Picard saying.) That was in response to (maybe) Data asking about the nature of the sentient being they “gave” life to. I see Moriarty in a similar light.
He was not evil or particularly villainous for much the same reason as in Emergence. Yes, he was based on the works of ACD, but the computer extrapolated a new being based on a fictional character for La Forge. What it added to the fictional character to bring it sentient life could have come from other available data, like logs, plays, and other files stored within its memory, similar to the other being in Emergence.
I think that could help explain his docile nature.
Yea that sounds like it works.
Star Trek camp – ya gotta love it. I had forgotten about this episode; it is only average. I guess we are still building the foundations, aren’t we? The captain looks as sharp as I’ve ever seen him – even better than that white tux in “First Contact.”
@9 S. Hutson Blount/@10 DeepThought
Yeah, the EMH was already an expert system extensively modeled on an actual person and designed to incorporate new experiences and information into its database. He only achieved sentience through continuous operation over several months to years, far beyond the design parameters, and arguably in part because of the rest of the crew responding to him as an actual person. Moriarty was created by the computer out of whole cloth and achieved sentience instantaneously.
As to Moriarty’s evilness, there’s actually very little about him in the ACD canon. He only appears personally in one story, his agents in another, and Holmes fills in a tiny bit of background in a few others. The computer would have to fill in a lot using later stories by other authors, and in those, Moriarty is often portrayed as a much more complex character, sometimes even sympathetically.
The computer would have to fill in a lot using later stories by other authors, and in those, Moriarty is often portrayed as a much more complex character, sometimes even sympathetically.
Also, if it went by the majority of fictional references it can find by then, Moriarty should be panting for Holmes’ hot body.
@ 8 Lemnoc
Sure Holmes had a dark side (and even sometimes a mean streak), but that wasn’t what allowed him to solve crimes so effectively. He had impeccable methodology, that’s it.
@ 12 Ludon
If the computer was basically copying the information on Data and injecting it into a sentient person (with the added bonus of emotional complexity), shouldn’t Moriarty basically be Lore? I mean, maybe a little less evil…
In any case, I’m unpersuaded by your and ShameAndFailure’s belief that Moriarty didn’t have to be evil. If he was created to defeat Data (as Moriarty was created to defeat Holmes), there has to be, as Pulaski said, a risk of failure. The only things that Data could “lose” and care about are his friends and his ship. That’s the only recognizable measure of defeat for Data, and Moriarty threatens both (as much as Pulaski is a friend – he should’ve taken Geordi!). When he gives Data that picture of the Enterprise, Data is freaked. He storms out of there and tells Picard. If that isn’t a deadly threat I’m not sure what is. Maybe he was doing it all just to find out who and what he is, but those are some pretty violent methods, no?
@ 15 DemetriosX
But that’s precisely why he should be evil. He was created for the sole purpose of defeating Holmes, and the little we know about him is that he’s an evil crimelord. Did I mention the air rifle in his cane?!
@ 16 CaitieCat
At least we didn’t have to watch Geordi offering Data a “rifling” lesson, eh? Sadly, they made Geordi more of a jam Watson. I was so impressed that Geordi correctly identified strangling as the cause of death (he’s not even a doctor!) and then Data goes and upstages him. Jerk.
@17 Torie
But there’s no reason for the computer to stick to the ACD canon. Geordi says “in the style of Sherlock Holmes,” not “in the style of Conan Doyle.” The computer has 400 years of pastiche to draw from, which would probably drown out the original works anyway. (Even Data is subject to this, apparently, since he uses a pipe with a curved stem rather than the canonical straight-stemmed pipe.) Much of the post-Doyle fiction that involves Moriarty treats him as more than, or at least other than, a straw-man nemesis for Holmes. A lot even suggests that Holmes’ view of the relationship is very one-sided, going so far as to say that Moriarty isn’t even a criminal, but the Holmes brothers’ old math teacher, and that it’s all a psychotic delusion on Sherlock’s part.
@12 Ludon
A spirited defense of an episode that is not without an admittedly clever and imaginative central concept.
It still revolves around the computer taking a poorly framed voice command and executing it to the power of 1,000. Do children aboard the Enterprise have voice command access to the computer? Could they not, for example, order a replication of crayons to the power of googolplex and burst the ship with them? Could not Worf say, “I could eat a horse,” and the computer would slide that mess out on a food tray?
Unless the ship designers are, y’know, Barclay, they would simply prohibit the ship’s entertainment system from overwhelming its vital functions, the way the GPS in your car, even when it malfunctions, cannot affect the steering or engine.
The other difficulty is the crew too readily accepts the wonder of the replication of new life. The computer spits out a character and they’re prepared to draw up certificates of citizenship in the Federation. They dance around like Victor Von F, “It’s alive! It’s alive!”
Their gullibility flies in the face of the Holmes maxim:
“When you have eliminated all which is impossible, then whatever remains, however improbable, must be the truth.”
The crew embraces the impossible. What they don’t consider is the improbable: that the computer has simply created a simulation and illusion of sufficient complexity that they don’t fully understand it.
@17 Torie Atkinson
Wasn’t the air rifle cane in fact the property of M’s associate, Col. Sebastian Moran? Guns seem rather messy for the master mathematician.
If Geordie asked the computer to replicate, with absolute accuracy and realism, the physics and effects of Chernobyl, would it do so?
The captain would be well advised to weld the doors shut forever on the holodeck.
Coincidentally I’ve been immersing myself in Holmesiana lately (mostly the BBC modern-day show I admit although I still regard Jeremy Brett as the definitive Holmes) so the timing of this re-watch is apt.
I feel much the same about this episode as I do about the Dixon Hill material in the first season: I get the idea that the writers wanted to use these holodeck episodes as an excuse to work in some other genre, but then they don’t bother to get the genre right. TNG’s version of the noir detective story is especially dreadful. The Holmes material is better if only because at least Data’s actually into it (compare Picard, who seems to regard the Dixon Hill simulation as an excuse to doze off in a fedora) but overall it’s just so…tepid. And too reminiscent of those awful, semi-comical Basil Rathbone pastiches.
You have to admit that Geordi’s advice to Data – that you can’t have victory without the fear of failure – is advice that a lot of fanfic writers should heed…
#16, @CaitieCat: “Also, if it went by the majority of fictional references it can find by then, Moriarty should be panting for Holmes’ hot body.” Or trying to steal Holmes’s boyfriend John :p
At least this episode (and much of season 2) is already a huge leap forward from season 1. Not that this episode is particularly good, but at least I can see a good leap in writing progress from the junk of season 1. So while this episode is nowhere near my favorite, at least when I see it on TV I’ll probably watch it over many other offerings if I’m bored.
Couldn’t say the same with season 1.
@19 Lemnoc
If we apply your logic, couldn’t we conclude that Dr. Pulaski was right? The crew, and much of Star Fleet, are accepting the impossible while ignoring the improbable that Data is just a simulation of life?
I doubt that a child on the Enterprise would be able to order enough crayons to fill a meal tray, much less enough to fill the whole ship. As with the electronic ID tag which controlled where I could and could not go when I worked for Charter, whatever ID and communication system is used for the children and other family members on a ship should involve a ranking system that limits their access to the systems. That the boy who took the shuttlecraft in Coming Of Age was able to do so suggests that learning to fly a shuttlecraft had been part of his schooling, so he had access to those systems. (I assume that all his training so far had been in open space and he was reaching beyond his training and knowledge in trying to enter an atmosphere and land on a planet.) While that boy wouldn’t have been able to do what was done here, La Forge, as Chief Engineer, would have the authority to call up the resources needed to handle his request.
The GPS system is not a good comparison here. Even when it is designed as a factory installed device, the GPS is still basically an add-on device. The holodecks are integrated into the ship’s computer network/system.
@17 Torie
As I see it, the risk of failure doesn’t have to mean a threat to the ship and the crew. The risk of failure could be a threat to Data’s personal image. Data sees himself as a good being – even more so after his encounter with Lore. If Moriarty is struggling for something that is right but Data doesn’t see it and does all that he can to stop Moriarty, wouldn’t he be running the risk of seeing his goodness called into question? I can see something like this being the kind of trap the computer would set to make the challenge worthy of Data. How might Data react after failing by taking Moriarty down?
While Moriarty’s actions do call his intentions into question, I keep in mind that he is still trying to reconcile all that he knows with all the baggage that had been assigned to his character.
@Ludon #24
re: access limits
I totally agree that there should be some kind of security controls on board the ship that prevent people from using devices above their privilege level, but that unfortunately doesn’t seem to be the way the thing’s designed. If the Meat Popsicles from a few episodes ago could intercom the captain on the bridge, presumably there wouldn’t be much in the way of limits for anyone else (let alone allowing a hologram access to the computer via the arch command).
I accept the idea that Geordi has access to the resources to do whatever he wants, but seriously, shouldn’t he have to use sudo first before the computer executes a command that creates a malicious AI capable of taking over the ship from the rec room?? It’ll disable all failsafes with no san check? We feeble 20th-century dwellers would’ve designed our systems better than that, unless the Enterprise is running Win95.
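Just to make the sudo joke concrete, here’s a toy Python sketch of the kind of guard I mean – every name, keyword list, and clearance number below is made up for illustration, and obviously none of it resembles how the Enterprise computer (or any real system) actually works:

    # Hypothetical sketch: risky holodeck directives should require both
    # command-level clearance AND an explicit confirmation step before the
    # computer touches any failsafes.

    DANGEROUS_PHRASES = {"defeat", "disable safety", "override failsafe", "sentient"}

    def needs_escalation(directive: str) -> bool:
        """Flag directives that create open-ended adversaries or touch failsafes."""
        text = directive.lower()
        return any(phrase in text for phrase in DANGEROUS_PHRASES)

    def run_directive(directive: str, clearance: int, confirmed: bool = False) -> str:
        """Refuse risky requests without command clearance and a confirmation."""
        if needs_escalation(directive):
            if clearance < 9:  # made-up "command level" threshold for this sketch
                return "DENIED: insufficient clearance for this directive."
            if not confirmed:
                return "CONFIRMATION REQUIRED: this directive affects failsafes."
        return "Executing: " + directive

    # Geordi's fateful request would at least get a second prompt:
    print(run_directive("Create an adversary capable of defeating Data", clearance=9))

Even that much would have saved everyone a lot of trouble.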
I think it would’ve been interesting to take this episode toward a case where Data is better off failing than succeeding. Sadly, I don’t see Pollyanna Roddenberry giving that the green light, alas.
All this talk of safety features is making me recall a science-fiction novel I once read. In this universe there are teleport gates: step into one, you’re blasted down to subatomic particles, and the other gate makes an identical you from its store of particles. A bunch of characters find themselves stuck on a makeshift spaceship with limited resources, so they start throwing spare members into the gate to “store” them. (It’s not connected to a matching gate.) The idea is that once they get out of danger they’ll just reconstitute their friends. However, when that time comes, it seems they had reached the buffer’s maximum memory and were simply overwriting people, one after another. I wanted to throw the book across the room. My Xbox won’t let me overwrite a saved game without a dialog box, but you can erase people without any warning. Bad design, not realistic in a world with lawyers, author fail!
@Ludon 24
I agree, it is a matter open for discussion. But IMO it appears to be a discussion that happened sometime in Data’s past. It doesn’t seem probable that Data was given the authority to issue commands and to order personnel (he is third in command of a starship with over 1,000 people aboard) without someone sometime in the past thinking this through.
Or did Pulaski just show up and sneer, “Did anyone notice this Freak is not alive?” and everyone just blinked and said, “Wow! Now that you mention it, heck, you’re right! Gee, never thought about it before, but that’s a headscratcher! Sure pays to look at things with fresh eyes!”
And I will also grant that the Federation almost by necessity has to be fairly non-prejudicial and fairly open minded about what constitutes a lifeform. But here you have the computer spontaneously spinning something up and Picard ready to issue it a passport. There is no sense here that this “lifeform” went through any generative, evolutionary growing process that even Data apparently went through.
Okay, my trivial addition to this conversation is that I was never bothered by Data being able to walk out with paper. I always figured that some props in the holodeck would be “real” manufactured/replicated items. Otherwise how could Picard get pelted with a snowball, or Wesley leave a sopping wet trail on the carpet, or Pulaski drink tea with Moriarty? I would think that some small items could be removed and kept as souvenirs ( “I nearly died on the holodeck and all I got was this lousy T-Shirt” ). I know, I know…we’ve evolved beyond the need to acquire possessions and all that, blah, blah, blah…I never bought into all that. Why bother to decorate your quarters then ( or keep a flute, or a Shakespeare folio around, eh, Captain )?
Anyway, my big problem with the holodeck ( besides its constant use as a faux jeopardy device ) is that its scope was too broad. Its parameters were ill-defined and often ignored at will. In the first episode we see Data hit a nearby wall with a rock and it pixelates. Yet if the objects are replicated, and the illusions so sophisticated, shouldn’t the computer predict the trajectory of the rock, dematerialize it, and create the illusion that the rock has kept going?
The biggie for me has always been the conception of space in the holodeck. When we see it in non-operating mode, it’s just a room…sometimes a big room, but a room nonetheless. In this episode, real players separate on a foot chase through the streets. How can a finite room create the illusion that one person is standing still while the others go on a foot chase some distance away, keeping everything perfectly in perspective and proportion from everyone’s point of view? To me, that’s harder to swallow than the principle of the transporter. I may not be an expert in future technology ‘n’ physics ‘n’ stuff, but I do know you can only walk so far and cram only so much into a room ( believe me I’ve tried ). That sort of “it’s bigger on the inside” thing works much better on ‘Doctor Who’ than “Star Trek”.
I’ve never been a fan of the holodeck. On paper, as a treat for the crew, it sounds good. But in practice on the series, not so much. Face it; if any current day amusement park ride endangered people with the regularity of the holodeck on the Enterprise they’d be banned everywhere. “Scramble my atoms and reassemble me somewhere else? Sure. No problem. Step into the crazy fun house of death? Screw you!”
@21 Lemnoc
If Geordie asked the computer to replicate, with absolute accuracy and realism, the physics and effects of Chernobyl, would it do so? The captain would be well advised to weld the doors shut forever on the holodeck.
NICE.
@28 Dep1701
These things bug me about the holodeck too. I’m willing to forgive them a little for the earlier episodes, when they hadn’t quite thought through the logistics of the technology, but the fact of the matter is, the holodeck works more like magic than anything else on the series and I don’t think they ever lock it down as something remotely realistic, even through Voyager’s run.
Something else to consider… Given the loose instructions Geordi gives the computer, an adversary capable of defeating Data could be a simple matter of programming a holodeck character who knows about his off switch, or giving it a weapon and disabling the safety features so that it can destroy him. The computer interprets his instructions as needing to make a character that is his intellectual equal, which still shouldn’t require it to be sentient. A computer can beat a grand master at chess without being self aware, after all. Maybe the problem isn’t Geordi being vague, or the system’s lack of security checks, but an actual, malicious attempt by the computer to kill all humans.
#21, @Lemnoc: I didn’t actually remember this until this morning, but do you remember that Rashomon-ripoff TNG episode, the one where Riker is accused of murder? Something created on the holodeck was actually able to cause damage (and a death, as I recall) elsewhere on the ship. So…yeah, Chernobyl would probably be a bad idea.
If the holodeck can create sentient, self-aware human beings at the drop of a hat, first of all it’s completely unethical and should be killed with fire, and second it removes any kind of uniqueness that Data is supposed to possess. Heck, Moriarty even has emotions. Data’s a second-rate AI at best.