Monday, April 21, 2008
Goodbye, Button Mashing
Next on the roster for manipulation of this sort (and here I always thought it would be cyborg limbs): video games! And you won't even need surgery.
In this newly designed device, a series of sensors placed on a special headset will record your intentions, emotions, and even (ugh) facial expressions in order to help you better interact with a virtual world.
Particularly nifty is how they got the device to figure out what you want -- it's surprisingly intuitive -- they just put people in the same situations over and over again and tried to figure out what their brain waves had in common.
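That training trick (record the same situation over and over, then look for what the recordings share) can be sketched as simple template matching. Everything below is invented for illustration, from the toy signal model to the "push"/"pull" intentions; the real headset's signals and classes are obviously far messier:

```python
import random

random.seed(0)

def fake_eeg(intent, n=64):
    # Hypothetical toy signal: each "intention" biases the wave differently.
    bias = {"push": 0.8, "pull": -0.8}[intent]
    return [bias + random.gauss(0, 1.0) for _ in range(n)]

# Step 1: record many repeats of the same situation...
trials = {intent: [fake_eeg(intent) for _ in range(50)] for intent in ("push", "pull")}

# Step 2: ...and find what the recordings have in common, by averaging them
# into one template per intention (the noise cancels out, the shared part stays).
templates = {
    intent: [sum(col) / len(col) for col in zip(*reps)]
    for intent, reps in trials.items()
}

def classify(signal):
    # Match a fresh recording to the nearest template (smallest squared distance).
    def dist(template):
        return sum((s - t) ** 2 for s, t in zip(signal, template))
    return min(templates, key=lambda intent: dist(templates[intent]))

print(classify(fake_eeg("push")))  # most likely "push"
```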
So, getting a game to know which direction you want to walk in just by thinking about it is pretty awesome, but you may be surprised to know that that's not what this is even for. So what is the darn thing supposed to do? And why the hell does it want to know your emotions? The answer is crazier than you might think.
So it can cater to them.
Homing in on these revealing brain waves allows the EPOC system to quickly deduce a player's emotional state and react to it by, for example, changing the music of a game in real time to match the user's tension, or throwing in more villains if a player seems to be getting bored of a certain world.
This three hundred dollar headset is going to read your mind just so it can throw more monsters at you when you're bored? Oh, come on! I want a full package. Supposedly the full body movement stuff is still in the planning phase.
I get the impression that this headset primarily does the realistic-expression-mimicking thing for online avatars, making it mainly a tool for extra-realistic online social interaction. It has thirty different presets that it tries to select from based on the signals you give it, so it's not an exact replication of the one-of-a-kind look on your face at a particular moment, but it's still probably going to be closer to your real expression than, say, your WeeMee on AIM.
Look out, Second Life. Things are about to get a whole lot creepier.
Tuesday, April 15, 2008
Decisions
In an experiment that tested people's brains while they were making a (notably pointless) choice, scientists were able to monitor unconscious sectors of the subjects' minds and figure out what decision they would make seven seconds before they thought they had actually decided. In other words, even though people consciously confirmed their choices as ones made purposefully, the readings suggested their unconscious minds sorted things out first and their conscious minds just went along with the committee.
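To get a rough feel for how a choice could be read out before it feels "made," here's a toy simulation with all numbers invented: a weak preparatory signal drifts toward the eventual choice over the seconds before the reported decision, and a decoder looks only at the earliest samples. Like in the real study, the prediction ends up better than chance but far from perfect:

```python
import random

random.seed(1)

# Toy model of the setup (purely illustrative numbers): an unconscious
# "preparation" signal drifts toward the eventual choice in the seconds
# before the subject reports having decided.

def trial():
    choice = random.choice(("left", "right"))
    drift = 0.3 if choice == "right" else -0.3
    # One sample per second, from 7 s before the reported decision up to it.
    signal = [drift * t / 7 + random.gauss(0, 0.5) for t in range(8)]
    return choice, signal

def predict(early_window):
    # Decode the upcoming choice from the early samples alone.
    return "right" if sum(early_window) > 0 else "left"

hits = 0
for _ in range(1000):
    choice, signal = trial()
    if predict(signal[:4]) == choice:  # only the first 4 s, well before "deciding"
        hits += 1

print(f"accuracy from pre-decision activity: {hits / 1000:.0%}")  # above chance
```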
So just what does that mean, exactly? It doesn't mean we have no free will, necessarily. Just because our conscious minds have a monopoly on our sense of self doesn't make them any more legitimate than the rest of our brains. Well. You know. I guess.
To be honest, even the scientists involved in this experiment got a little squirmy about it. From the article:
Haynes and colleagues now show that brain activity predicts -- even up to 7 seconds ahead of time -- how a person is going to decide. But they also warn that the study does not finally rule out free will: "Our study shows that decisions are unconsciously prepared much longer ahead than previously thought. But we do not know yet where the final decision is made. We need to investigate whether a decision prepared by these brain areas can still be reversed."
So, we learn for sure from this experiment that stupid decisions such as choosing a hand for button pushing are prepared in advance for our conscious minds to receive, but we don't know for sure whether that choice, once prepared, couldn't be rejected consciously if we had some legitimate reason.
We also don't know if consciousness plays a part in more important decisions that might bring morals, emotions, or anticipation of future events into play in a complicated way. Decisions that might have consequences.
But it is one of those spooky suggestions that point towards a world where we might only be passively watching ourselves and trying to make sense of it all.
Maybe. To me, it's more likely that it's just a sign that we're a little bit bigger than the conscious selves that float in our heads analyzing ourselves and the things around us. There's a lot going on behind the curtain. The hidden stuff belongs to us too, though. Don't panic.
Monday, April 14, 2008
The First Animal
Evolution might change a single species in any number of ways, while at the same time species with opposite traits could be thriving in the same environment. Sometimes traits that are bad for survival are preserved, buried among traits that seem to balance them out.
So, scientists have found that the comb jelly is older than the sponge. This is significant because the jelly is a lot more complicated than the sponge. It means that nature may have back-pedaled after coming up with the first animal, removing many of the bells and whistles of the comb jelly and its descendants, such as tissues and nerves, to create the sponge: a cleaner, simpler creature that retained the ability to survive, and consequently went on existing long enough for us to become its contemporaries.
What does this say about the way the world works? Well, for me it brings to mind Planet of the Apes and other creepy scenarios where humankind "devolves" into a more primitive state, as the ability to create high tech weapons with our superbrains becomes more liability than advantage and takes us to the brink of extinction.
Interestingly (or terrifyingly, depressingly, if you like) the capability of humans to contemplate evolution in itself has caused a lot of destruction, if you look at the pseudo-science fueling the genocide and racism of World War II. Ideas and nukes can both be dangerous, especially put together.
Currently, humankind's unique abilities have allowed us to dominate the planet essentially unchallenged, if you discount the bugs and the microbes and other things that are more abundant than we are, even if they can't strictly eat us or kill us as long as we have the right tools. But I'm not sure this is really what evolution means -- it isn't some machine aimed at the creation of a species with the capacity to dominate. It isn't really the survival of the fittest, either, just the survival of whatever survives. Anything and everything that survives.
Worms and ducks and whales and people and all that other stuff out there.
Tuesday, April 8, 2008
More Mad Science
British researchers say they have created embryos using human cells and the egg cells of cows, but said such experiments would not lead to hybrid human-animal babies, or even to direct medical therapies.
Dr. Lyle Armstrong of Newcastle University presented preliminary data on his work to Israel's parliament last week, Newcastle University said in a statement released on Tuesday.
They said they had hollowed out the egg cells of cattle and inserted human DNA to create a growing embryo. The hope would be to take it apart to get embryonic stem cells.
Full article here.
Nothing funny going on here; we just made HUMAN COW CHIMERAS. But don't worry! They can't grow up! Ha ha ha... So, okay, it's not quite a cowperson. It's just a human stem cell encased in a cow egg "shell" of sorts that can't develop too far before it dies. Super!
The idea is to practice with these cow eggs without wasting human eggs, which are understandably in high demand for creating, well, babies. Interestingly, this crazy shit is all part of the process involved in learning to make stem cells do what we want so that we can have awesome things like total limb regeneration and mending spinal cords.
But I do still feel a little bad for the poor little freak cowpeople. They're so pitiful they can't even become real organisms. ...And if they did, man would they be pissed.
(image source: starcostumes)
Sunday, April 6, 2008
Human Limb Regeneration!
(image by: Sbocaj)
For a while now we've been, as a species, looking jealously at salamanders, wondering how they can regrow chopped-off limbs. I think it's safe to say that most of us assume that, though awesome, the complete regeneration of lost body parts isn't really for humankind.
Maybe one day, with nanomachines or cybernetics, we'll find a way to tell robots how to build us new arms and legs. But we can't do it the way that salamanders do it. Even healing simple wounds seems to almost overtax the human body's ingenuity, leaving pronounced scars made of tissue that doesn't match and serves no purpose beyond sealing.
That view, however, is changing! Scientists are starting to learn more about the process salamanders use to regrow limbs, and it turns out, it's not quite as far from our own capabilities as we might have thought. Humans have the capacity to regrow "limb buds" during embryonic development and even as adults can still regrow their fingertips, if not the joint.
What's so special about the salamander healing process that lets them regrow exactly the portion of the limb they're missing, while humans are left with nothing but a scar and a stump? The answer is called a "blastema": a bunch of cells that behave much like stem cells and are capable of regeneration. They apparently form from the fibroblasts around the wound site, the same cells that in humans just create clumps of functionless extracellular material to fill in the hole.
Although there are other complicating factors, the main reason animals that can regenerate early in life can't do it as adults seems to be that something called a "Fibroblast Growth Factor" circuit, or FGF, gets turned off at some point in the organism's development. By turning it back on in frogs (which can regrow limbs as tadpoles), scientists were able to create amazing, tragic frog monsters with malformed limbs sticking out of places they don't belong.
They're still working on it.
Tuesday, April 1, 2008
Windows
The future of computers and television is see-through. And bendy. Innovations in nanotechnology mean that the screens of the future might just be pretty weird. The most obvious advancement scientists have in mind is to create a readout that displays directly onto the windshield of a car, giving you access to maps and other useful information.
But there are other ideas, too, like a reusable newspaper you can roll up and swat people with. Even your laptop might be a candidate for some bending and folding. You might be able to program your wallpaper to show different high-resolution displays, or read all the books you want from just one sheet of paper. The full-wall TVs of Fahrenheit 451 are almost inevitable at this point.
Still, I see a bright future for modern art.
Monday, March 31, 2008
Learning Robot is "A Lot Like a Puppy"
First off: apparently, when artificial neural networks are used in AI, they have to be carefully limited to keep robots from doing dumb things. I'm pretty sure that at this level this just means controlling the number of completely irrational solutions the robot comes up with for a problem on the level of "There's a wall in my way," but looking ahead to the future, I like to think that one missing line of code could stand between peaceful coexistence and robots eradicating humanity.
On a more serious note, the point of the article is that through cleverly mixing artificial neural networks with the traditional "pre-programmed" approach to AI, they found something that works a lot better than either one works on its own.
Here's a quote, because this is just that awesome:
Working in the EU-funded COSPAL project, Felsberg’s team found that using the two technologies together solves many of those issues. In what the researchers believe to be the most advanced example of such a system developed anywhere in the world, they used ANN to handle the low-level functions based on the visual input their robots received and then employed classical AI on top of that in a supervisory function.
“In this way, we found it was possible for the robots to explore the world around them through direct interaction, create ways to act in it and then control their actions in accordance. This combines the advantages of classical AI, which is superior when it comes to functions akin to human rationality, and the advantages of ANN, which is superior at performing tasks for which humans would use their subconscious, things like basic motor skills and low-level cognitive tasks,” notes Felsberg.
What's really interesting is how they handle the problem of setting a robot free to learn on its own. Essentially the robot has no innate criteria for making decisions, so how does it know when it's done something "right"? Well, we tell it so. A human operator has a device with two buttons.
The good boy button, and the bad boy button.
Wow. Can you imagine when this becomes a toy, or an ethical issue? There's actually a video game that functions similarly. I guess it'll be harder without a magical fairy.
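For fun, here's a minimal sketch of that two-button training loop. The ANN layer is stubbed out as a hard-coded percept function, the classical layer just picks the best-rated action, and the good boy / bad boy buttons nudge action values up or down. None of this is the actual COSPAL code; every name and number is made up to illustrate the idea:

```python
import random

random.seed(2)

ACTIONS = ["forward", "left", "right", "back"]

# Stand-in for the trained ANN layer: turns raw "sensor" input into a
# coarse percept the classical supervisory layer can reason about.
def perceive(sensors):
    return "wall_ahead" if sensors["front_distance"] < 1.0 else "clear"

# Action values the robot learns from the trainer's two buttons.
values = {percept: {a: 0.0 for a in ACTIONS} for percept in ("wall_ahead", "clear")}

def choose(percept, explore=0.2):
    # Classical layer: mostly pick the best-rated action, sometimes explore.
    if random.random() < explore:
        return random.choice(ACTIONS)
    return max(values[percept], key=values[percept].get)

def press_button(percept, action, good, lr=0.3):
    # The "good boy" / "bad boy" buttons: nudge the value toward +1 or -1.
    reward = 1.0 if good else -1.0
    values[percept][action] += lr * (reward - values[percept][action])

# Training loop: the operator rewards turning when a wall is ahead,
# and going forward when the way is clear.
for _ in range(200):
    sensors = {"front_distance": random.uniform(0, 2)}
    percept = perceive(sensors)
    action = choose(percept)
    if percept == "wall_ahead":
        press_button(percept, action, good=action in ("left", "right"))
    else:
        press_button(percept, action, good=action == "forward")

print(choose("wall_ahead", explore=0))  # learned to turn instead of ramming the wall
```

The division of labor mirrors the quote above: the stubbed `perceive` is where the ANN would live (low-level sensing), while `choose` plays the classical supervisory role on top of it.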