Group Dynamics – Who's In, Who's Out.


Many animals live in groups, and primates are generally thought to be particularly interesting in terms of human behavior. Not, of course, that we fit neatly into any of the observed patterns, but behavioral scientists have fun looking for parallels.

When we look at groups larger than small family groups, we see social structures based on a dominance hierarchy. The dominant male, and it is the prerogative of the male, sits at the center of the group with members spread out to the periphery. An individual's spatial position is a good indicator of its group status. The usual model is that subordinates run away after losing a fight, or at least after a peremptory challenge.

Evers et al. in this week's Public Library of Science exercise a different model. In this case, subordinates in the group steer clear of more dominant members in order to avoid a dust-up (1). This well-mannered approach leads to a spread of individuals with the least dominant at the periphery and increasingly dominant individuals the more centrally you look.
This leads to a more structured group and doesn't appear to be particularly surprising. Of course, not everybody will be polite all of the time, and some running about to avoid a thrashing will occur, so the structure is a little fluid.
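The flavor of this kind of avoidance model is easy to sketch in a few lines of code. The rules below, a mild pull toward the group's center plus a polite sidestep away from any nearby higher-ranked individual, are my own toy assumptions for illustration, not Evers et al.'s actual implementation:

```python
import random

def simulate(n=30, steps=400, seed=1):
    """Toy dominance-avoidance model; rank i = index i, 0 is most dominant.
    Returns each individual's final distance from the group's center."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0)] for _ in range(n)]
    for _ in range(steps):
        cx = sum(p[0] for p in pos) / n
        cy = sum(p[1] for p in pos) / n
        for i in range(n):
            x, y = pos[i]
            dx, dy = 0.05 * (cx - x), 0.05 * (cy - y)  # mild group cohesion
            for j in range(i):  # every j < i outranks i
                ex, ey = x - pos[j][0], y - pos[j][1]
                d = (ex * ex + ey * ey) ** 0.5 + 1e-9
                if d < 0.5:  # politely step away from a close-by dominant
                    dx += 0.03 * ex / d
                    dy += 0.03 * ey / d
            pos[i] = [x + dx, y + dy]
    cx = sum(p[0] for p in pos) / n
    cy = sum(p[1] for p in pos) / n
    return [((p[0] - cx) ** 2 + (p[1] - cy) ** 2) ** 0.5 for p in pos]

dist = simulate()
# Expect the most dominant individual to end up nearer the center than average.
print(dist[0] < sum(dist) / len(dist))
```

Nobody in this sketch ever fights or chases; the central position of the dominants emerges purely from the subordinates' good manners.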

Much more interesting, though, is the result for really large groups. In this case the politeness leads to a well-spread-out population with the formation of smaller social groups among the plebs.

Not that these sub-groups are seditious and ready to stage a coup; rather, they are harmless local “clubs” that make their members happy. If you end up at the periphery of one of the peripheral sub-groups, it will take a degree of self-delusion to feel good about your lifestyle. It might be time to emigrate. Easy to do in the computer model of Evers et al., but not so simple in real life.

  1. http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0026189

Mooning


Credit:  NASA/GSFC/MIT/SVS


We have been mapping the moon for a long time now, but the latest results from NASA give us more detail than ever before (1). For the past year NASA’s lunar orbiter has been steadily going round and round the moon, shining a light into every nook and cranny it can see. It’s by no means finished yet and will be plugging away in its polar orbit for the next two years.

The satellite is equipped with a laser altimeter firing laser pulses at the surface. Each single pulse is split into five so that when the reflections come back, their different arrival times indicate the distances they have travelled. A pulse reflected from a high point gets back sooner than one reflected from a slightly lower point on an adjacent part of the surface.

The spatial resolution is around the 100-foot mark and will be better near the poles, as these lie directly under the orbital track. The images are stunning. The coloring indicates the height of the features: red is high and green is low. The height resolution is around 3 feet.
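The arithmetic behind those numbers is just light travel time. Here is a quick sketch (illustrative only, not NASA's actual processing chain; the timing values are invented for the example):

```python
# Time-of-flight arithmetic for a laser altimeter (illustrative sketch).
C = 299_792_458.0  # speed of light in m/s

def surface_height(round_trip_s, spacecraft_altitude_m):
    """Height of the surface spot relative to the spacecraft's reference altitude."""
    one_way_m = C * round_trip_s / 2.0  # the pulse travels down and back
    return spacecraft_altitude_m - one_way_m

# Two of the five split beams hit adjacent spots; the one that comes back
# about 6.7 nanoseconds earlier hit ground roughly a meter higher.
dt = 6.7e-9
height_difference_m = C * dt / 2.0
print(round(height_difference_m, 2))  # about 1 m
```

So a 3-foot height resolution amounts to timing the returning pulses to within a handful of nanoseconds.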

Why do we earthlings care about such detail? When we get back up there to set up mining camps to exploit the resources, we will need to know how easy it will be to get machinery where we want it, the best routes, and where the most shaded and coldest spots are, as there might be some ice around. That would be particularly handy, not just for our gin and tonic, but for fuel.

Above all, it will be nice to have a hi-res picture before we’ve had a chance to spoil it, don’t you think?



  1. http://www.nasa.gov/mission_pages/LRO/news/lola-topo-map.html

Left-Brain, Right-Brain For Moral Judgment?


We hear a lot about whether we are “left-brain” or “right-brain” people depending on our occupations and other activities. The cool analytical thinkers among us are the left-brainers while the artistic, creative types are the right-brainers. Of course, some of the time we make fools of ourselves and we could call ourselves “no-brainers”, or at least we feel like that, usually when we are battling in vain with some new software update.

The Guardian Improbable column (1) drew my attention to a paper published last December by Cope et al. in Frontiers in Evolutionary Neuroscience (2), in which they come up with the answer to an unexpected question, namely: which half of the brain do we reserve for processing immoral stimuli?

My prejudices immediately went into overdrive but, of course, have zero validity when there are systematic experiments to pore over. The researchers carried out three studies. The first two were based on reading statements with heads in the big magnet; the third used pictures. Now, not all the statements or pictures were of immoral acts; some were neutral, or there wouldn’t have been a benchmark.

In the first experiment, fifty 25-year-old guys were shown statements about themselves doing things with their sisters. These ranged from bad (like incest and murder) through gross to harmless. The second set had twenty-three 30-something men and women who had to judge whether the acts described were wrong or not; these covered a wider range of the type of issues that currently exercise many people in society. The picture group was back in the 25-year-old age band, but included both men and women. The “naughty” things here did not include sexual immorality, but rather drinking and driving, housebreaking and the like.

Well, they got a lot of colorful pictures, and we very clearly work with just one hemisphere when we’re making judgments on other people’s moral behavior. Which side, do I hear you ask? The left hemisphere is the one at work when we are making those snap character assessments of what that couple is doing in the back row of the movies.

Not surprising really, I suppose, as we are bringing our cold analytical judgment to bear. However, I’m left wanting more. Which hemisphere are you exercising when you are practicing that immoral sexual licentiousness, drinking and driving, or maybe housebreaking?

Maybe we’ll never know. It’s hard to indulge in those activities whilst wearing a large magnet.

  1. http://www.improbable.com/
  2. Cope et al., Frontiers in Evolutionary Neuroscience (2010). doi:10.3389/fnevo.2010.00110

There, There, Never Mind.


Yesterday’s post mentioned the rapid visual assessment of “attractiveness” from still or video images. Rapid assessment of an individual's mood is an important faculty for our survival as well as our happiness. Reading body language is a skill that is deeply embedded in our evolutionary past, even though our more cerebral evaluations make us doubt our “instincts.” On the other hand, our dog is very sharp at reading the signals, and if it growls, maybe we should listen.

The most recent variant on the instant assessment front is covered in a BBC report of a study in the Proceedings of the National Academy of Sciences by Kogan et al. (1,2). Here their “lab rats” watched silent movies of 23 couples. In the main footage, one half of a couple told the other about their hard times; the lab rats were allowed to watch for a generous 20 seconds, after which they came up with a score for prosocial tendencies.

So far, so good, but where’s the real test here? Well, the film stars had their DNA laid bare, specifically their OXTR (oxytocin receptor) gene, or at least parts of it. The G and A alleles of the gene were the focus. The BBC has called oxytocin the cuddle chemical (1), and the researchers matched up the allele combinations with the perceived empathy estimates.

The top-rated empathizers all had a pair of G’s, so GG means you’re one of the good guys. An AA or AG combo means, well, maybe you’re not a good listener and won’t offer a handkerchief and a cuddle, but more likely pithy advice like “get over it”.

It’s a rather scary thought that a glance at your facial expression lays bare parts of your genetic makeup as if they were tattooed on your forehead. I wonder what other behavioral/gene/body-language patterns will show up next?

  1. http://www.bbc.co.uk/news/health-15693508
  2. http://www.pnas.org/content/early/2011/11/08/1112658108

Hot Or Not Decisions


In these days of Photoshop, the old phrase “the camera never lies” is no longer to be heeded. Except, perhaps, before we set about playing with pixels. At that point we stare at the screen and ask ourselves why the bathroom mirror is so much kinder, even if we can’t describe it as flattering.

Images, whether Photoshopped or not, are an important feature of all on-line dating sites. OK, there is the list of interests etc., but the photo is the focus.

Apparently, there is a strong suggestion in the literature of a marked discrepancy between attractiveness ratings of individuals based on video clips and those based on static pictures. The suggestion is that the video images provide greater depth and richness to our assessment of “like” or “dislike”.

Now, of course, you need a whole studio facility to modify a video sequence, so maybe the richness is outweighed by reality. Rhodes and a large team from the University of Western Australia decided to put this to the test and get a definitive answer as to whether static and video images are at odds in depicting hot chaps (1).

In their experiments, the lab rats were 58 females in the 17–35 age band, asked to rate static images and short videos of 60 heterosexual men in the same age band. The male attractiveness ratings ranged from 2 to 8 out of 10, but there was absolutely no difference between the static-image and the video assessments.

The team also rated the mating success of the guys and found that it positively correlated with their attractiveness rating. In other words, hot guys scored.

The final conclusion was that the videos made no difference to the assessments, probably because the girls made an instant appraisal of how hot the guys were. This, of course, fits in nicely with Gladwell’s conclusions in his book “Blink” (2), in which he proposes that we make up our minds very quickly with the minimum of information. Probably an evolutionary trait from when we were deciding whether that big new beast was fierce or friendly.

  1. http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0026653
  2. M. Gladwell, “Blink,” Little Brown & Co, New York, 2005.

Night Owls


As we grow up our sleep behavior changes. In our teens, we need lots of sleep and getting us out of bed in the mornings is not an easy task. This is not made any easier by the tendency of the young to be semi-nocturnal – a habit that can last until they should all know better. Of course, way back in our evolutionary infancy, we were nocturnal little primates doing our best not to get swallowed up by the large predatory beasts out there.

As the ancestral primates of 50 to 60 Myr ago came out of the dark, they developed a wide range of social structures, running the gamut from solitary individuals through single-male polygamous groups to large social groups. There have been a couple of theories as to how the social groups developed. Shultz and her colleagues have published the results of a computer simulation that seems to fit the bill better than previous pictures (1).

The benefits of daytime foraging are counterbalanced by the increased risk of being spotted and ending up as someone else’s breakfast. The simulation predicts that this predation risk would be lessened by loose aggregations of the emerging primates. Further cohesion into large cooperative social groups would enhance the benefit still more.

Better foraging and better protection against predation then lead to more babies and expanding populations. So it seems that social living resulted from giving up the night shift and didn’t have anything to do with looking around for sexual partners.

It seems somewhat ironic that after some 50 Myr of evolution, our younger generation should seem to be so in love with the nocturnal lifestyle.

  1. http://www.nature.com/nature/journal/v479/n7372/full/nature10601.html



Metabolic Blues


It seems to be unfortunate, but inevitable, for many of us that as we get older our weight increases. Somehow it matters little how much we stress over our BMI, eat healthy (or try), and take regular exercise (or try), our shape never gets back to how we fondly remember it. We have changed from that lithe, sprightly figure in those now-fading photographs to a more fruity appearance – more apple- or pear-shaped.

Unfortunately, the extra weight puts us at risk for a variety of problems known as metabolic syndrome, which is shorthand for a whole lot of things going wrong. Now Kang et al. took a group of 565 76-year-old Koreans and gave them dual-energy X-ray absorptiometry scans (1). These scans are normally used for bone-density measurements, but can also yield fat densities around the body.

After a great deal of data analysis, the biggest risk factor for metabolic syndrome turned out not to be the participants’ BMI values. The clear troublemaker was the amount of android fat.

Android fat wasn’t something I had been focusing on in my workout/eating plan, although I clearly should have been. I have been stuck with BMI and will now have to become more sophisticated. Android fat is the fat that makes us look “apple-shaped” and, of course, means that our waist measurements are more important than our BMI values.

My smartphone app is stuck with BMI. It’s time that our Android phones had an app that logs our waist measurement as a guide to our Android fat level and gives us advice on diet.
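For what it’s worth, the arithmetic such an app would need is trivial. A minimal sketch (the 0.5 waist-to-height cut-off is a common rule of thumb, not a figure from the Kang et al. paper):

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight in kilograms divided by height in meters, squared."""
    return weight_kg / height_m ** 2

def waist_to_height_ratio(waist_cm, height_cm):
    """A crude stand-in for android fat; values above ~0.5 are commonly flagged."""
    return waist_cm / height_cm

print(round(bmi(80, 1.75), 1))               # 26.1
print(waist_to_height_ratio(94, 175) > 0.5)  # True
```

The point of the post in two lines: the first number looks only mildly worrying, while the second is the one doing the apple-shaped warning.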

  1. http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0027694