Saturday, December 17, 2011

Attention and the web

Fittingly brief article on attention and technology:
Untangling the web: attention | Technology | The Observer

References a book on the topic, with a collection of pieces from the likes of Steven Pinker etc., but also points out that the jury is still very much out in terms of hard evidence (at least for the moment).
But I do like her closing comment:
" Over the last year I've insisted again and again that the web is not doing anything to us; that it merely presents us with a mirror that challenges us to face ourselves. The only way we can untangle ourselves from the web is to pay attention to this, and to reflect on what it is, in the 21st century, we do to ourselves and to one another."

I personally don't concur that the web is not doing anything to us, since I think any such significant change in how we organize, evaluate and manage not only our lives but our self and social image must result in some cognitive changes, just as countless other things do. But I agree with the idea that what is really required to deal with this is not just better analysis and understanding of the technology, but of ourselves as well. As the title of Charlie Brooker's new TV drama suggests, it is a "black mirror", a glass through which we see ourselves, albeit sometimes darkly.

don't regret regret

Nice little talk on the subject of regret on TED: www.ted.com/talks/kathryn_schulz_don_t_regret_regret.html


I especially liked the phrase 'control-Z culture' (as in ctrl-z, the computer 'undo' command), since there is some truth in the idea that in the modern world we desire, and even expect, to be able to 'undo' mistakes. I think there is a way in which the often awe-inspiring 'progress' and development (in technology, science, wealth etc.) which surrounds us leads almost to a subconscious belief that everything is always possible, and nothing is ever permanent, or at least not the bad stuff anyway.
 
While such optimism is of course often a good thing, there is an element of immaturity to it, since it means that at some level we always appeal to some global 'mother' to come and make things all right again. But the harsh reality of adult life is that things happen and stay happened, and maybe our culture is less adept at acknowledging that, and hence at dealing with it.

What was also interesting in the talk was the list of areas in which people have the most regrets, with the top 5 being: Education (32%), Career (22%), Romance (15%), Parenting (10%) and Self (5%). I think this is unsurprising, but also informative, since these are all domains in which we envisage goals for ourselves, rather than experience them. We want 'to have' a better education or career, to be better parents, as a sort of extension of our selves, but ultimately they are only means to an end - so why not regret the actual ends? Romance may be an exception, since this is something which is perhaps an end in itself (we live relationships, not just have them), but when viewed as regret, then maybe it is also rather ethereal and 'idealistic' - since we are wishing for what never was, and which might never be as we expect. This I think strikes at a fundamental conflict in our lives - we dream of being certain kinds of people, generic types or levels of career, education, and even partner, but we live as specific individuals. A professor or a CEO still lives through the daily actions we all do, and it is these actions that ultimately constitute their lives. The higher labels only provide means, not ends.
So I think it is unfortunate that these top our regrets list, because even though we may care about them more, maybe they ultimately matter less. They are about the self-image we have of who we want to be, not who we actually are. Though of course another point is that the brute fact of this tendency means we need to consider decisions in these areas much more carefully, since they are the ones which will haunt us, rightly or wrongly. And there is some consolation in that, perversely, the things we fret about most, which might actually have the most tangible effect on our lives - the financial worries, the minor social worries - are actually the things we will forget about sooner.
One final point from the talk which is worth mentioning, since forewarned is perhaps forearmed : we regret the things we just miss more than those we had less chance of achieving. Missing a plane by 3 minutes feels worse than missing it by twenty. The reason seems to be that in the closer case, we can imagine how other actions could have made the difference, and this imagination is an important element of regret. Knowing this, maybe we can be more rational and handle it better. As the ctrl-z idea shows, the point is not to banish regret, but to deal with it.

Wednesday, December 14, 2011

Study indicates no news is really good news, or at least better than FOX news

While it goes without saying that there are serious questions about Fox news' journalistic balance, it could be assumed that even biased opinions on reported events would raise awareness about those events themselves. Unfortunately this might not be the case, as a study by Fairleigh Dickinson University suggests.

In response to questions about middle east events, people who watched FOX news were less able to judge how things had actually turned out than people with no news source at all.

Since naturally this has provoked a response (albeit the ones I saw were mainly of the blog-with-US-flag-and-rattlesnake-logo variety, which does indicate a certain disposition to start with) it is worth checking out the original survey results themselves, available here on the university website.

This should help counter any superficial rejections of the study's methodology. For example, said rattlesnake website claimed that the figures presented didn't show FOX news as the ONLY news source, but only perhaps as one of many, the logic being that FOX news couldn't then be solely blamed. Alas this blogger probably knows less about logic and statistics than actual rattlesnakes, but maybe a simple analogy would help explain the situation. If I put my hand into 3 boxes, containing rabbits and hamsters, rabbits and rattlesnakes, and rabbits, rattlesnakes and hamsters, respectively, and get bitten in the last 2 but not the first, then it doesn't take much of a leap of logic to know which pet not to get my kids.
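To make the statistical point behind the analogy concrete, here is a small illustrative sketch. The numbers are invented, not taken from the Fairleigh Dickinson survey; the point is simply that if every group whose mix of sources includes FOX scores worse than comparable groups without it, then "FOX was only one of many sources" is no defence.

```python
# Hypothetical illustration only: invented per-respondent accuracy scores
# (fraction of questions answered correctly), grouped by the mix of news
# sources each respondent reports using - the 'boxes' of the analogy.
from statistics import mean

groups = {
    "other sources only":    [0.70, 0.65, 0.72, 0.68],
    "FOX + other sources":   [0.55, 0.50, 0.58, 0.52],
    "FOX only":              [0.48, 0.45, 0.50, 0.47],
    "no news source at all": [0.60, 0.62, 0.58, 0.61],
}

for label, scores in groups.items():
    print(f"{label:24s} mean accuracy = {mean(scores):.2f}")

# If every group containing FOX scores lower than the comparable groups
# without it, the 'one source among many' objection doesn't rescue it.
```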

The main point to be drawn from this is I think not that FOX news is a terrible news source (this is more obvious than the pet question), but that its style of news is not just uninformative, but actually detrimental, which is not something even I would have expected. More studies would be needed to tease out exactly why this is the case, but several possibilities come to mind.

Overall tone overrides the message
Just as the imagery in adverts etc. can be shown to override any accompanying (and even contradictory) verbal message, it could be that the overall tone of FOX news drowns out any factual reporting. Maybe the average viewer is aware of US diplomatic battles with Syria, and so assumes any revolt against Assad has succeeded (with God, or at least Uncle Sam, on their side how could it not?), whereas there is no such preconception informing the Egyptian uprising. Indeed maybe something like Obama's famous speech in Cairo linked him to Egypt, and hence any average FOX-viewer prejudice against him would feed into other questions.

The unmentioned better than the dismissed.
Another possibility is that since things like the Egyptian revolt (a successful Islamic overthrow of a dictator without foreign intervention) didn't fit the FOX narrative, it was so played down on the channel that its viewers also absorbed a disdain for and active disinterest in it, and hence were basically purely guessing when asked. People without an admitted, chosen news source of course don't live in an isolationist bubble, and are still going to be aware indirectly of world events, just by channel surfing, or even seeing news headlines on paper stands etc. So even if they don't actively care, such news may still catch their attention momentarily, and thus inform them, raising their chance of answering the question correctly above chance. I.e. maybe FOX news turns people off certain things, and hence makes them more ignorant.

Suits us, but still means something
Of course, it has to be admitted that the reason this study has gone viral is largely the antipathy and disgust with which most people view FOX news, and so any scientific evidence confirming our opinions is going to be jumped on. But regardless of this, the bottom line is there is statistical evidence to show that bad news is worse than no news, and this goes beyond FOX, tying into the wider debate about how important matters are handled by society.


Tuesday, December 13, 2011

pink vs. blue, really boys toys?

The famous Hamleys toy store in London has decided to scrap its traditional separation of boys' and girls' toys and instead arrange the sections by type of toy. Apparently until now some floors were floral and pink for the 'girls' stuff', and 'boys' toys' were in their own levels, all noise and action.
This has prompted a couple of interesting news articles about what, if any, real gender taste differences there are. The bottom line is it's very hard to be sure either way, since nature and nurture are so intertwined, and initial environmental nudging can result in the same reinforced feedback loops as innate disposition. What does seem likely is that there is no genetic female preference for pink (something which makes sense given that in previous times pink was considered a boy's colour!) but there is probably some tendency for boys to prefer vehicles and machines, and girls to prefer more role-based toys, such as dolls. As someone with both a daughter and a son I can definitely provide ample supporting anecdotal evidence for this one, but despite this I was still impressed recently to see in a documentary that this gender preference is also visible in chimpanzees, even to the extent that the more masculine (as indicated by testosterone) the chimp was, the more likely it was to choose a toy car over a doll.
What follows are three articles from the Guardian. 

Pink vs. Blue - the initial piece that caught my attention:


Are pink toys turning girls into passive princesses - an article by Kat Arney, who argues against the idea

Out of the blue and into the pink - by master debunker Ben Goldacre, who turns his attention to spurious evolutionary reasoning

Tuesday, November 29, 2011

When people are paid by results their attitudes change

In reading about child motivation I have often come across the thesis that motivating children by means of rewards (basically payment) is not only no more effective, but might actually discourage them from the desired behaviour in general. I think the common example given is children being offered a reward to play with a certain toy (but one which would appeal to them anyway), and then being monitored when they are left alone later with the same toy. The studies showed that children who had been provided with an incentive then didn't play much with that toy when the incentive was removed, whereas control children continued to do so. The interpretation was that the children now viewed playing with the toy almost as 'work', and hence it was somehow excluded from their normal behaviour. The implication of course being that if such a reward mechanism is used, it will fail once the reward is removed, and does nothing to encourage the child to 'internalize' the behaviour, which is obviously what is actually needed.

But it is interesting to see that the same conclusion can apply to adults as well, and even more specifically, adults in the domain of economics, where rationality is still, even if bounded, largely assumed. Aditya Chakrabortty's Guardian article "When people are paid by results their attitudes change" primarily focuses on how English rugby seems to have descended into selfish money grabbing, but references some interesting studies on this general phenomenon.

For example :
"Researchers now know a fair bit about how that shift works. Well over 100 tests have been carried out in which subjects are split in two and set some puzzles, next to a table with some glossy magazines. One group is paid $1 for each puzzle solved; the other does it for free. Time after time, the group working for nothing devote themselves to solving the puzzles. Those getting paid finish fast, then flick through the mags"

Or how when an Israeli daycare centre started charging parents for late drop-offs, the problem actually got worse:
"With tardiness now costing 10 shekels a pop, more parents should have turned up on time. But no. They came even later, because they saw the late pick-up now not as social embarrassment but as a service. And even when the centres stopped charging, the latenesses remained permanently higher. The introduction of a market norm had made its participants permanently more selfish."

In a society based on incentives and deterrents, this is something that needs to be taken seriously.

References (from Chakrabortty's article):

A book review of "Not Just for the Money: An Economic Theory of Motivation" (Bruno Frey) which suggests:
"Extrinsic motivation involves external rewards, most usually associated by economists with the price system. Intrinsic motivation, or "behavioral motivation" as Frey also refers to it, comes from within. For both extrinsic and intrinsic motivation taken separately, the more we are motivated, the more effort we will put into a task. But research has shown that there may be situations when the two do not necessarily work together.
Frey invokes a familiar term to economists, 'crowding out', to describe the worst-case scenario. Crowding out occurs when the negative effect on intrinsic motivation of offering a monetary reward outweighs the positive extrinsic motivation. To use the labor market as an example, the result would be a reduction of work effort despite more pay. The profundity of this finding is that the result runs counter to the predictions of economic theory."


The paper on the Israeli daycare centre.
Abstract:
"The deterrence hypothesis predicts that the introduction of a penalty that leaves everything else unchanged will reduce the occurrence of the behavior subject to the fine. We present the result of a field study in a group of day-care centers that contradicts this prediction. Parents used to arrive late to collect their children, forcing a teacher to stay after closing time. We introduced a monetary fine for late-coming parents. As a result the number of late-coming parents increased significantly. After the fine was removed no reduction occurred. We argue that penalties are usually introduced into an incomplete contract, social or private. They may change the information that agents have and therefore the effect on behavior may be opposite than expected. If this is true, the deterrence hypothesis loses its predictive strength, since the clause 'everything else is left unchanged' might be hard to satisfy. "

Tuesday, November 22, 2011

The swype is mightier than the sword

Due to a broken shoulder I'm down to one working hand at the moment and as a result have been writing a lot on my phone using swype. Apart from finding it actually easier and more efficient than trying to stab away on a laptop, once again I find myself thinking about the method itself, and whether there might be a qualitative difference in using it. 

Writing with swype is actually very much like writing with a pen, in that one sweeps gracefully through the word one letter at a time, and I could imagine that this involves a different mental focus when compared to proper double-handed typing (or even its two-fingered virtual keyboard cousin). With typing, the word is pumped out almost as a unit, via an automatic burst of key strokes which are almost in parallel, and there is little thought or feeling of the individual letters that constitute it (which might explain my tendency in emails to mix up similar-sounding words, like "are", "our" and "or"). But when writing with a pen, or swype, one must deliberately spell out the word, letter by letter, and this must involve slightly different thinking? If only because one must wait until each word has fully resolved itself, and made that bit more of an impression, before one can fully turn one's attention to the next. Maybe it is even similar to the difference between the old way in which reading was taught, with the focus on the full word as a whole, and the new phonics method of teaching, which is supposed to be more effective. The fact that the two styles of instruction differ in effectiveness indicates a difference in mental processing, between gulping the word down as a block, and slowly sipping it in, chunk by chunk.

It must be possible to devise studies that could analyse this, e.g. by comparing paragraphs written with the different methods, and it would be interesting to see if there is any variation in style, flow or word choice. For example, maybe the rapid fire of typing encourages free association, with words and ideas leading almost unconsciously to the next, and maybe in contrast the slow plodding pen (or swype) results in more deliberate thought. I'm not suggesting this is the case, but I think our mental processing can be so influenced by subtle things that such phenomena are definitely possible. And, since digital writing is now the norm, even being commonplace in schools etc., replacing the centuries-old tradition of inscription, then it is surely a topic worthy of study. Online and in archives we are what we write, so it is important to know if how we write matters.
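As a rough illustration of the kind of comparison such a study might make, here is a minimal sketch. The sample sentences and the particular features (word length, lexical variety, sentence length) are just my own illustrative choices, not an established methodology.

```python
# Compare a few simple stylistic features of two text samples, e.g. one
# produced by typing and one by swyping. The samples below are invented.
import re

def style_features(text: str) -> dict:
    words = re.findall(r"[a-zA-Z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "avg_word_length": sum(len(w) for w in words) / len(words),
        "lexical_variety": len(set(words)) / len(words),      # type-token ratio
        "avg_sentence_length": len(words) / len(sentences),   # words per sentence
    }

typed_sample = "Typed quickly, ideas tumble out one after another and run on."
swyped_sample = "Written slowly, each word is traced out and weighed before the next."

for name, sample in [("typed", typed_sample), ("swyped", swyped_sample)]:
    print(name, style_features(sample))
```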
Of course, maybe we just haven't developed the same proficiency in swype as we have in typing, and have lost that proficiency through disuse in writing by hand, so even if there are mental differences now, maybe they would converge over time. Again this is something which could be tested, for example by comparing the types and swypes of subjects with varying skill.

However, I think it will always be true that swype/pen is slower and requires more effort, since the letters come out serially while typing is more parallel, and maybe this contains the significant difference. And this extra effort is I think a good thing - I am reminded of the Ents in the Lord of the Rings, with their excruciatingly laborious language, which was so drawn out and took so long to say anything with, that they only ever said something if it was worth taking a long time to say.

What also might be relevant is that writing never used to be as natural as speaking (even if not our biological default, it is of course still in some way "natural" if we do it at all, like wearing clothes), and maybe proficient typing, in being almost automatic, is actually closer to speaking. Which then raises questions about whether 'new' writing is different to 'old', and whether there are any consequences of this.

Or is this maybe changing, at least in certain domains? And should we be aware of this, and maybe learn when each style is more appropriate? The productivity of modern technology also means it is easier to generate more rubbish, and the sheer volume of comment and statement we can produce might dilute what actually matters, and result in reduced quality, particularly through lack of clarity. Because no matter how effortless it is, speaking relies on accompanying elements beyond the words, such as tone, facial demeanour and expressive sounds, which help convey subtlety and nuance - tools which writing must do without, and hence writing requires more care and effort to avoid misinterpretation.
Which is why of course rapid text/chat speak was accompanied by the invention of emoticons like smileys, but these are relatively weak aids given the complexity of human expression.

So while we should think before we speak, we must think even more before we write, and maybe slower, serial input methods further this. The pen is mightier than the sword, but the qwerty keyboard is double edged.

Friday, November 4, 2011

Cutting Edge : anonymous accountability?

Real not reality
I really like Channel 4's documentary series Cutting Edge, even though its subject matter is generally much more mundane than the topics that normally catch my eye (or DVR). Indeed it's probably because its topics are often so seemingly everyday and normal (for example comparing different pubs with the same name or the day of a bin man etc.) that it stands out, because it consistently manages to capture illuminating and interesting facets in seemingly ordinary situations.

As I've often said, I am definitely not a people person, and hate reality TV, but even I am often intrigued and drawn to the unusual individuals that it uncovers in everyday life. These are 'real' not 'reality' people, and I have to admit there is something cheering about encountering such characters, and being impressed by unexpected qualities of the average 'man in the street' (the classic example of which was the binman who quoted Socrates in between emptying trash cans). What I think primarily differentiates these people from the type who appear on reality TV is that in reality TV they actively want to be on it, and are generally projecting a vain and vacuous image for that purpose, whereas in documentaries such as Cutting Edge the participants are simply observed going about their daily lives, and even if there might inevitably be some playing to the camera, it seems minimal.

Who guards the advisors
This week's episode (Channel 4's webpage is here, and it also seems to be available on youtube here) was on the problem (or promise) of the burgeoning phenomenon of online reviewing, and in particular Trip Advisor. On one side were the hotel owners who claimed they were powerless in the face of anonymous and unaccountable damaging reviews, and on the other were examples of the more prolific of Trip Advisor contributors, self-confessed fanatics who saw it as their duty to inform and warn others.

Apart from the classic Cutting Edge characters captured (the 'Basil Fawlty' style hotel manager, or the nerdy reviewer who reads everything he writes out to his grandmother over the telephone) it was an interesting exposé of a complicated area which is symbolic of the modern internet age. On the one hand it is definitely a good thing that the service industry can be constantly reviewed, and potential tourists can inform themselves, yet on the other there is the significant power without responsibility wielded by the commentators.

Personally, from the small set of examples in the programme, I would tend to be on the side of the hoteliers, since they were visibly hurt (emotionally and economically) by the reviews they were receiving, and did seem to really want to provide a good service. In addition, some of the reviewers seemed an unlikeable lot, obsessed with finding things wrong, or in the case of one particularly annoying woman, refusing even to debate on camera what she had written. But it was not as clear cut as that, since there was bad behaviour by some owners out of view, and some of the reviewers were cheerful and witty about their work, and more than happy to defend it in person.

What makes this interesting is that I think it is an illustrative example of how we need to adapt to the new world order of an online culture of commentary. What needs to be accepted is that there is no going back to the days of a handful of 'elite' reviewers, and both sides will have to live with this new phenomenon. To do this, what I think is required is education and increased awareness on both sides. The hotel owners need to accept that there can now be instant, widespread and yet possibly groundless publication about any mistakes they make, but the tourists themselves, who use Trip Advisor, also need to factor in that the person making the review may not be reliable or unbiased.

For the hoteliers, now that they know everyone can write a review afterwards, they need to establish feedback mechanisms, or even simply ask, to allow visitors to vent any frustration (if only partly) without resorting to the net.

And for tourists, they have to realise that the type of person who writes a review is more likely to be either the kind of person who makes a hobby out of nitpicking, or someone who was spurred to write by a particularly bad experience, and hence any collection of reviews will probably be biased towards the negative. In addition, as studies such as the Stanford Prison Experiment have shown, anonymity is conducive to abusive behaviour, especially the exertion of arbitrary power over others, which is exactly the power that online reviewers wield. As these dynamics become so prevalent in our society, we need to become more adept at recognizing and compensating for them.

At the end of the day, what I think is the real problem is not anonymity, but accountability, or the lack of it. Being anonymous may be problematic in how it enables irresponsible behaviour, but it is also advantageous, and not just for the extreme cases of people living in repressive regimes. The presence of malicious 'trolls' on the internet, and the damage they can inflict, means it is often advisable not to reveal one's identity, since online it is so easy to upset someone, and so hard to reconcile with them. Unfortunately there are plenty of (generally young male) people out there who are itching to take offence, and then take revenge (there are some horrific stories of people being smeared as paedophiles etc. simply due to a simple online spat in a forum).

What I think is needed is some form of anonymous accountability, some way in which identities can be protected, but still held to account. One possible solution for this would be a registered anonymous identity, an online persona which would not be traceable back to one's real world name, but which would be a consistent persona on the internet. The idea would be that all comments and online activity would be made under the same pseudonym, allowing the history of what that person did to be seen, and judged. Of course this would not prevent malicious behaviour, since the person would still be untouchable, but it would encourage consistency, and reduce one-off attacks.

The main problem would be how to encourage people to actually maintain the same online identity, and not just chop and change, but one mechanism for this would be to have some central site which allocates these names, and restricts how many and how often it does so, and for it to become appealing to have this particular name. For example an online newspaper might allow anonymous commenting, but only with registered names, and ensure one per user, with (appealable) banning for misuse. And of course the ideal version of this would be one (or a few) well respected social network sites issuing names, which would then be accepted by various other organizations (e.g. if a site 'facelessbook' issued reliable usernames, then all newspapers might allow anonymous commenting under its login name).

The point is if a person's online history is available, then not only will they be induced to be consistent, and maybe think more about what they're doing, but also it will allow others to judge them by their previous actions. If someone only makes miserable reviews or indulges in trolling on various websites, then one would know to discount their comments. Similarly if someone DOESN'T use this mechanism, then maybe they've something to hide, or are at least not prepared to stand by what they said, so why listen to them.

There could even be things like an online 'reputation' credit rating service, whereby if you wanted to check up on a particular user's history, a web site would search the internet for all their postings (all travel review sites, all newspapers etc.) to show what kind of person they were. It is important to note that this would not compromise privacy: people would 'opt in' by using the same name, but it could be set up in a way that most normal people would want to. And of course their ultimate identity would be secure (although maybe some protection would be needed to prevent indirect identification, e.g. google might well be able to link real-named gmail accounts with IPs posting under particular pseudonyms).
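As a toy sketch of how such a registered-pseudonym-plus-reputation scheme might hang together (all names, sites and the scoring rule below are hypothetical, not any real service):

```python
# A central registry issues persistent pen-names; participating sites report
# activity against them, so anyone can review a pseudonym's track record
# without ever learning the real-world identity behind it.
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class PseudonymRegistry:
    issued: set = field(default_factory=set)
    history: dict = field(default_factory=lambda: defaultdict(list))

    def issue(self, pseudonym: str) -> bool:
        """Allocate a pseudonym if it hasn't already been taken (one per user in spirit)."""
        if pseudonym in self.issued:
            return False
        self.issued.add(pseudonym)
        return True

    def record(self, pseudonym: str, site: str, kind: str) -> None:
        """A member site (newspaper, review site...) reports activity under the name."""
        self.history[pseudonym].append((site, kind))

    def reputation(self, pseudonym: str) -> str:
        """Crude 'reputation report': how much of the visible history looks negative."""
        records = self.history[pseudonym]
        if not records:
            return "no visible history - treat with caution"
        negative = sum(1 for _, kind in records if kind in {"abusive", "one-star rant"})
        return f"{len(records)} posts across sites, {negative} flagged as negative"

registry = PseudonymRegistry()
registry.issue("grumpy_traveller_42")
registry.record("grumpy_traveller_42", "tripreviewsite.example", "one-star rant")
registry.record("grumpy_traveller_42", "newspaper.example", "comment")
print(registry.reputation("grumpy_traveller_42"))
```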

My basic point is this phenomenon of online commentary is here to stay, and we as the technical generation need to work out new mechanisms to preserve the potential and avoid the pitfalls.

Thursday, July 7, 2011

first contact

One would hardly think it, but it seems there are indeed still a few ‘uncontacted’ communities in the world, with a previously unknown tribe in the Amazon jungle having been spotted from the air recently.

The first reaction I think is: cool, good for them, let’s leave them alone – but is that really justified? Given the history of exploitation of indigenous peoples, the taking of their land and destruction of their way of life, then it would seem to be. But I think there is also an element of the myth of the ‘noble savage’ involved, that somehow this simple, primitive life is worthy, and should be preserved. But for whose sake is it really then being preserved? For the people involved, who must live it out in its harsh reality, or for us, who like the idea of some preserved specimens of the life we long ago left behind? They may be spared the hedonism of the modern world, but it’s a valid question as to whether the inevitable Hobbesianisms of primitive life don’t outweigh that… the transience of modern desires versus a transience of existence when life is poor, nasty, brutish and short.

Of course maybe their society is above average as undeveloped communities go; maybe it isn’t racked by the famine and conflict that our pre-history most certainly was – a sparser population with more natural resources may indeed facilitate a more bucolic lifestyle. But for sure they will lack modern medicines, modern technology, and modern education. Looked at as individuals, who must suffer any injury without anaesthetic, who must perhaps stand helplessly by as a child dies of some preventable disease, things are a lot less clear. Even beyond sheer physical well-being, who of us would rather live a permanently closed life, knowing nothing of the wonders of the wider world, barred from ever accessing the means and technology to broaden our minds, to read, write, and even surf the internet? Or even if we would rather, who would be righteous and self-confident enough to impose that life on someone else, not some abstract person but someone we know and care about, like a child? Surely our philosophy is that for better or worse everyone has a right to live life to the fullest, even with its inherent downsides.

And of course, as mentioned before, the track record on ‘first contacts’ is not something we can be proud of. But it is also probably true to say that this is more due to the people doing the first contact, rather than it being necessarily detrimental. Surely some form of gradual contact, slowly opening the outside world to these people, and importantly, their future generations, could be successfully done? And since eventually some contact is inevitable, better it’s on the right terms, than from some random loggers or miners who stumble across them?

Apart from our rosy notions of primitive man, I think there is also at work the moral disjunct between sins of action and sins of omission. Moral judgement is always harshest on actions we specifically choose to take, rather than those we simply fail to. To push someone into the path of a car is viewed as more morally reprehensible than to fail to push them out of it. In both cases the result is the same, the person might die because of our choices, but in one we are murderers, in the other just callous. I must explore this further some time, but I guess at root there are an infinite number of actions we can fail to take, so morality could not function if we could be blamed for them all, and so intuitively we view such omissions as less blameworthy. But at the society level, where we debate what is best to do, then intuitions must be tempered with rationality, and in this case leaving them alone is just as much an action as contacting them, and therefore should be weighed in terms of advantages and disadvantages, and importantly from the perspective of their physical and mental well-being, not just from that of our emotional one.

Tuesday, March 8, 2011

Blink, or you'll miss it

Another interesting book from Gladwell, and as usual it contains a feast of fascinating psychological insights. However, in this particular case, there is a dark side to these cognitive titbits, and while he does qualify his recommendations in light of this, there are questions raised as well as answered. This food for thought requires some chewing...

The power of blink


The focus of the book is what could be labelled 'blink thinking' - the rapid cognition which 'we' do automatically, but also unconsciously. The stuff that happens in the blink of an eye, but which, fittingly, we also don't see or even realise.

The classic example of this is what is known as 'thin slicing' - extracting a wealth of information and judgment from a very thin segment of experience. That this can be a surprisingly powerful technique is clear from the many examples Gladwell provides. For instance there is the marriage counsellor John Gottman, who can predict, after just an hour of conversation, with 95% accuracy whether a married couple will still be together in 15 years; or there is the fact that people who rate a professor's teaching ability based on just a few seconds of video clip reach similar conclusions to those of students who experienced an entire semester of classes.

It seems that in many such cases, there is a behavioural 'signature' which is always present, carries great significance, and which we are remarkably adept at (subconsciously) picking up on, even with minimal exposure to it. This is of course impressive, but does make sense, in that pattern recognition is a major feature of our mental apparatus and we are constantly finding out new ways in which it excels. While we might like to think the prowess of the human mind is in rationality and logical ability, as the field of Artificial Intelligence shows, the real challenge in replicating human behaviour lies not in implementing such calculations, but in recreating our power to identify and process patterns, even in such a seemingly 'simple' area as vision. Furthermore, as social creatures, it also makes sense that we have specialized pattern identification powers in this area as well. However Gladwell also shows how such social observations are not just interesting in their own right, but may lead to seemingly unconnected consequences. For example a study showed that the chances of a doctor being sued could be more reliably estimated from 'thin slicing' conversations with patients than by analyzing how many actual mistakes he/she made. It turns out a doctor who used a patronizing or dominant tone in conversation would be more likely to be sued than one who actually made more medical mistakes. Again, in retrospect this can be seen to make sense, since suing is in a way a legal expression of what is at root a moral reaction - the feeling of being wronged and apportioning blame - so it is understandable that the patient's personal relationship with the doctor plays a major role; however, it does highlight how our social and emotional subconscious reactions may underlie what might be assumed to be rational areas of action. As a side note it is also perhaps comforting that the statistics bear this emotional influence out, since it shows the majority of lawsuits are actually based on real feelings of indignation, not just an opportunistic chance to make a quick buck, which would be the case if people sued whenever the doctor made a mistake.


Manipulating Blink, and manipulating us


The story starts to get into darker territory though, when Gladwell shows us how this powerful subconscious mechanism can be influenced, and also (more disturbingly) how it can then influence our conscious, 'chosen', behaviour.

There are of course many examples these days of 'priming', whereby situational elements can be used to tilt the balance between one unconscious interpretation and another. What is unnerving about the examples Gladwell gives though is how the effects can be seen to percolate into other areas which we would like to think are sacrosanct. So, for example, having people think about either being a professor or being a hooligan had a marked impact on their ability to answer general knowledge questions. One can literally put oneself into a 'smart' frame of mind by imagining being a smart person - or not, as the case may be. Much as we like to think of ourselves as our own person, this raises the issue of how identifying with someone else can change such subtle elements of our character. While the study seemed to focus on the positive effects of imagining a professor, one can assume there would be similar negative effects brought to the fore when taking on the role of a hooligan. Such 'social' mental states affecting us social creatures is one thing, but, perhaps more surprisingly, there are also examples of seemingly trivial physical factors playing a detectable mental role. So for example in one study subjects found a cartoon more or less funny, depending on whether they held a pen between their lips, or between their teeth. This particular result is linked to the feedback loop that is now thought to exist between our emotions and our physical manifestations of them - in this case smiling (which is inhibited by a pen between the lips, but forced by a pen between the teeth).

We like to think such things are personal mental qualities and experiences, not something which can be manipulated by something as trivial as biting on a pen, but more and more research shows how misconceived this might be. This is a fascinating area, and I've read quite a few articles on the subject recently - for example, in this sciam article, furrowing one's brow while working on a problem makes one think in a more analytic manner, whereas puffing out one's cheeks has the opposite effect. While, to make things even more bizarre, this article shows how daydreaming about love puts one in a creative frame of mind, whereas thinking about sex makes one more analytical!

Normally priming is I think associated with loading a judgment balance, pre-placing facts into our mind which then tip the scales faster in one direction than another, but it is I think a different, darker, ballgame if it influences not just 'what' goes into our decision, but 'how' that decision is made - how creatively or analytically, how smartly or how humorously.

Unknown unknowns


This is all quite surprising and disturbing, because by and large our subconscious processes are not just impenetrable to us, but we don't even know what we don't know. As is often the case, absence of evidence is I think assumed to be evidence of absence: we don't feel our decisions to be moulded in this way, so assume they are not. This is particularly detrimental when combined with our need to have a consistent narrative for our actions, to rationalize them, even after the event. When it comes to explaining our behaviour, our nature abhors a vacuum, so when the governing factors occur under our radar, we invent irrelevant reasons and justifications. We then not only confuse ourselves, but when confronted with this confusion we don't try to correct it, but rather to bolster it.

An example Gladwell gives is asking both normal people and expert food tasters to rate jams. If just asked to give gut feelings, both groups produce roughly similar results. But if the non-experts are also asked to explain their decisions, then their preferences change and get all mixed up relative to the experts, and even inconsistent relative to themselves. Trying to figure out why we like something ends up disturbing what we think we like as well! Related is the example of how, if we are shown a face, and later have to pick it out of a line-up, we normally do quite well; however, if in between we are asked to describe that face, then we become much worse at actually identifying it later. Apart from being a useful tip on how to confuse a witness if ever accused of a crime, this raises an important issue: consideration can confuse! What seems actually to be happening is that if asked to explain/describe our choices/experiences, then we replace the automatic reaction with consciously (re-)constructed information, and in these sorts of domains, that might not be appropriate, and hence something gets lost. The cliche of a picture painting a thousand words might apply here, since we are trying to convert a complex pattern into conscious 'words', and fail miserably.

If this was just about recognizing faces and choosing jams, then this wouldn't be much cause for concern. However our instinctive social evaluations are also being shown to play a role in more serious areas, like prejudice. The most unnerving example of this is the Implicit Association Test, or IAT. I think it's instructive for everyone to try out this test, especially the racial version, and a good example is available online at https://implicit.harvard.edu/implicit. This test measures how fast we are at pairing two items, for example in the race test between a black/white face and a positive/negative word. The premise of this test is that we pair things faster when we already associate them together, and any change in response time indicates a conflict between rational choice and implicit association. So for example if someone takes longer to pair a black face with a positive term, then this is claimed to reveal implicit negative associations with black people.
I remember when I tried that test a while back, I was shocked to see I had a variation in response time, but thankfully (for my pride, but alas not for society) I am not alone in this (and hence not a closet racist!) since even Gladwell himself wasn't as implicitly fair minded as he thought he would be.
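To make the response-time premise a little more concrete, here is a minimal sketch of how such a gap might be quantified. The reaction times are invented, and real IAT scoring (the so-called D score) is more involved than this.

```python
# Invented reaction times (milliseconds) for two kinds of pairing trials.
from statistics import mean, stdev

congruent_ms   = [612, 590, 605, 628, 598]   # pairings the subject finds easy
incongruent_ms = [715, 740, 702, 760, 725]   # pairings that take noticeably longer

gap = mean(incongruent_ms) - mean(congruent_ms)
pooled_spread = stdev(congruent_ms + incongruent_ms)

print(f"mean gap: {gap:.0f} ms")
print(f"gap relative to spread: {gap / pooled_spread:.2f}")
# A gap well above zero (relative to the spread) is what the test reads as an
# implicit association; consciously willing oneself to be 'fair' doesn't remove it.
```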

What this shows is that regardless of our conscious attitudes to things like racial equality, we also have unconscious attitudes which, while not playing the major role, colour our initial impressions and interpretations, and these first reactions can be corrosive.
This chillingly reveals how prone we can be to stereotyping at a deep level, and this can of course be detrimental not just via unfair negative biases, but through misplaced positive ones as well. Gladwell gives an example of the latter in the 'Warren Harding Error' whereby in one election, a lot of US voters seem basically to have wrongly assumed a distinguished and intelligent looking candidate would make a distinguished and intelligent president. A similar bias probably explains why business leaders tend to be significantly taller than average, since we seem to automatically associate physical stature with leadership ability. We might dismiss such biases consciously, but they lurk on under the surface.

Conscious counter actions


But there's some light at the end of our tunnel vision. As Gladwell points out, "just because something is outside of awareness doesn't mean it's outside of control". While it's true that we can't control the first impressions which are shaped by our experiences and environment, we can however change those external factors (change some of our experiences etc.) so as to foster and reinforce the more preferable impressions.
Taking the Race IAT again and again doesn't change our performance, no matter how much we consciously will ourselves to be 'fair'. What does change our performance, though, is thinking beforehand of examples of black people we know of and respect - for example famous figures like Nelson Mandela or Martin Luther King. This is of course basically priming, which might seem somehow manipulative, and to some extent it is, but only in so far as it is needed to counteract the pre-existing priming effects of our society being awash with negative stereotyping. If we remind ourselves of how the stereotypes don't hold true, then we prevent ourselves from falling back on them as automatic default assumptions.

The change in IAT results shows how this can be effective, and the changes can also be shown at a biological level. I unfortunately can't find the study anymore, but I remember reading of a brain scan experiment which showed how the amygdala (very much involved in fear and threat perception) would automatically light up when a subject was shown a foreign face. However the study also seemed to suggest that priming the subject to think of the person as an individual (this is a postman, a father etc.), and I guess somehow identify with them, reduced this activation. So there is definitely some hope and comfort to be taken from all this. We may not be able to change our hard-wired circuits that react to an 'other' (which can be configured by ignorance or by culture) but we can de-tune the inputs to those circuits to prevent them firing.

Sadly, this ability to prime our reactions also shows the lie in the cliche "sticks and stones may break my bones, but names will never hurt me". Name calling, stereotypes and even 'mere' jokes all help tune these circuits in the wrong way. An example of the tangible and serious consequences of this is seen in a study by Albert Bandura, in which students delivered much higher electric shocks to another group of participants merely because they had overheard someone saying that the students from the other college seemed like "animals" (see http://www.psychologicalscience.org/observer/getArticle.cfm?id=2032). Names may not break bones, but they break moral barriers, which opens the door to evil acts. Similarly, this is why we should never tolerate vilifying or objectifying rhetoric from politicians and others, because even 'free speech' is never free from effect. Apart from reinforcing and inflaming those who are that way inclined anyway, it also sets the tone of debate and has a priming effect even on those who think they can just rationally discard it.

So as Gladwell points out "to really be a fair minded person, it is not enough to rationally act that way, but we also need to develop the related automatic mindset, by exposure to counter-examples to stereotypes". Put bluntly, we can't trust ourselves to act as we'd like to, unless we train for it.

And of course there are other types of 'priming' that we need to be careful of. Gladwell discusses how simply being overstimulated can lead us to act in blinkered, blink-dominated ways. As a real life example of preventive measures against it, he points out how many police forces have now banned high speed chases - not just due to the danger of 'collateral damage' during the chase, but mainly to avoid the excessive or misguided reactions of officers that can often arise after such exhilaration (many riots over police brutality were actually triggered by events following a car chase).

Accept blink yes, but...


Overall, I think Gladwell makes a persuasive and interesting case for the power of 'blink'. His examples are illuminating, and he delivers an important lesson: there is more to our actions and even choices than we think, and we ignore this at our peril. Furthermore it's an especially important point that outside of awareness does not mean outside of control. It's one thing to make the spirit willing but we also can, and must, train the flesh not to be weak.

What makes me feel uncomfortable though is that he seems to me a little over-enthusiastic about this unconscious power. He gives several examples of how blink thinking is better than trying to rationally decide, trying to list pros and cons etc., which can lead to what he terms 'paralysis through analysis'. While flashes of insight are of course always necessary, and I agree we should always try to leave some room for them in our deliberations, I also think that he sometimes risks encouraging too much reliance on 'intuition'. So, for example, in a story about the Cook County ER, he explains how doctors were taking too much information into account when trying to decide which people complaining of chest pains might really be on the verge of having a heart attack, and should be allocated scarce hospital beds, and which should be sent home. It turns out that there were just a few simple variables which were most predictive, and these weren't being spotted in the wealth of analysis undertaken. The implication is: too much needless thought.

But in such a situation could one really ever recommend the approach of not taking everything into consideration? The whole point is no one really knew what mattered, so they tried to factor in everything that might. It turns out this wasn't necessary, but only after a lengthy computer analysis of statistics. Human blink thinking wouldn't have ever reached this conclusion, and it would in my view be wrong to trust it, even if we thought it might. Our powers of pattern recognition are amazing, but such mechanisms have no corrective/evaluative feedback - hunches may be right, but it's irresponsible to take serious action based on them, since if we can't identify the grounds for our conclusions, we can't judge how shaky the foundations are. Being hard wired to see patterns means we might also see them where they aren't. Burying ourselves in data might stop us seeing the wood for the trees, but it also stops us seeing the crop circles.

What is true however is that, due to our inherent confirmation bias, more information might raise our confidence, but not our accuracy. Gladwell gives an example of psychologists reviewing a case history becoming more and more convinced of their (wrong) diagnoses the more they learned about the subject. This is something we for sure need to be aware of, but it would be ludicrous to reject more information just because of this risk!

And I think this argument is relevant in the other examples he gives - the successful 'spontaneous' wargame strategy, the higher success rate in problem solving tests if subjects didn't try to explain their ideas, and so on. In such cases there are for sure times when blink thinking might come out on top, but the point is that if we can't be sure it will, then we have to take the less successful but more responsible conscious path. This wisdom is embedded in our culture - don't act without thinking, don't be impetuous, look before you leap etc. And this is I think the reason why we can be shown so much about blink: because we grow up learning to overcome it, and as things like the race IAT show, for good reason. It's also not just about being pragmatic - there is a sense that we only truly 'own' an action if we consciously decide to take it. We would be much more scathing of someone who thought through an evil action in detail, than of someone who acted in a moment of madness.

Gladwell does actually touch on this notion of ownership, and who we feel ourselves to be, when he points out that a round package actually will make us rate ice cream as tasting better than it would in a square package. He correctly points out that it's hard to really justify why we would consider this as less relevant to the 'taste', to the experience of the ice cream, than, say, bigger chocolate chips. There is indeed a hard truth that we aren't the simple rational beings we think we are, and while this is an area worthy of much more discussion, I think it still doesn't take away from the value of trying to be such beings. Rational deliberation might not be the approach that always produces the best action but will consistently produce the most defensible ones. So while recognizing the need to factor in our gut reactions, I would be very wary of over-reliance, even if this reduces the advantage that can sometimes be gained from them.

Finally, while it probably isn't a major factor, it does I think merit saying that the very fact that this powerful mechanism is unconscious means we need to be careful in saying what really arose from it alone. So, in the opening example about the statue that 'didn't look right', all the experts said they felt misgivings from the start - from their first 'gut reactions'. But this is them recalling these 'gut' events later, and surely then there is at least some risk that the very rationalizing impulse Gladwell criticizes elsewhere, was also playing a role here?

Daniel Dennett and others have discussed conscious awareness as being something that might be retrospectively applied, and involve 'filling in' as the current situation requires. So for example we might suddenly tune in to a conversation when our name is heard, and be aware of the entire preceding sentence, but in fact what was said before our name popped up would not have been in our consciousness otherwise. I.e. it seems to have been almost pulled into our consciousness afterwards, and only once we realized it was about us. I suspect there is room for something similar happening with these recalled first impressions: there might have been various subconscious ideas bubbling away, which were only later, perhaps with the overlay and product of some conscious thinking, woven into the final narrative. I'm not trying to discount the fact that these experts probably did have gut pattern recognition which picked up on something being wrong, but to be consistent we also need to allow for the possibility of some element of 'storytelling' being present here as well.

So, overall another fascinating book, but one which I think requires some caveats and qualifications before taking it fully on board. Like it or not, we need to acknowledge and appreciate blink thinking in our lives. However we need to harness and tame it, not let it carry us away.

Links to related Scientific American articles
Does falling in love make us more creative?
How fantasies effect focus
Rapid thinking makes people happy
Of two minds when making a decision?




Wednesday, February 9, 2011

surveys and data gathering

Every now and then I get a call to my mobile, asking me to take part in some survey or other. There's something intrusive about cold calling, so I invariably tell them I don't have time, and if they're too slow or persistent to realise I'm just trying to be polite (would you have time tomorrow?) then of course I ratchet up my refusal!

However, when I think about it, as someone who loves reading the results of the latest psychological or societal study, which often rely on mass surveys, this isn't reasonable behaviour on my part. After all, if everyone had my attitude, there would be no one to answer the needed questions. So perhaps I should view it as one of those necessary tasks we all should undertake to help society at large - things which sometimes seem to be not just an inconvenience but an irrelevance to us, and yet which may have real value to the community as a whole.

Of course, another mental excuse which often comes into play is the idea that it is probably commercial anyway, and why should I help some marketeer in his targeting of me and my disposable cash. But this also doesn't make a lot of sense, since in essence what I'm saying is: how dare they try to figure out what I might want! Surely it is in general a good thing not only that the products we want are available, but (to reduce waste and energy consumption) that those we don't want, aren't!
I suppose the source of hostility arises from an association of such practices with advertising, but this can't really be the case. Advertising is about creating a want without a need, but this doesn't work if you do the asking first!

This is I think related to the more general practice of questionnaires and data gathering. While normally this is again just at the level of inconvenience, there are also times when it involves issues of privacy.

For example, in kindergarten they now, I think, do some questioning/assessment of the kids and their behaviour and abilities. The problem which some people then see is that such data sets can be used to build up a profile/history of a child over its school life. And the problem with that is, maybe something from a child's past will influence how its later actions/behaviour are viewed. If a child had learning/behavioural problems at an early age, then this might colour later interpretations. In an extreme view there might be the idea that a child is pre-judged based on supposed 'early indicators'. I.e. because he/she exhibited such and such in pre-school, then he/she *must* have such and such a disposition or disability.

However, being honest, I think the potential for good is much greater than the potential for harm. Even if such a link between early traits and later behaviours were possible, it could never be a conclusive one, so its use in a negative manner is limited. However, the link only needs to be probable to justify 'pre-emptive' actions at an early age - which, even if over-applied based on such data analysis, would mean the support reaches the kids who really need it, and would be very unlikely to harm the ones who didn't (who would be harmed by extra tuition etc.?).
Of course if this were used as a basis for 'streaming' kids at too early an age then that would be another matter - but again the point is the scientific evidence wouldn't be strong enough to make a conclusive judgement, which would be required to impose such measures. But for supportive extra measures no one is going to require it to be shown 'beyond reasonable doubt'.

The point is such data gathering really only has 'forward-looking' uses - what things will be sold, what behaviours might appear in the future etc. It is statistical prediction, and as such only has proper validity, and use, at the group level. At the individual, specific level, what someone will really do, or why they really did it, can't be extrapolated from that.
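To make that group-versus-individual distinction concrete, here is a minimal sketch (my own illustration, not taken from any of the studies mentioned, using a purely hypothetical 30% base rate and population size): the aggregate prediction comes out almost exactly right, while the best possible guess about any single individual is still wrong nearly a third of the time.

```python
# Minimal sketch of group-level vs individual-level prediction.
# The 30% base rate and the population size are hypothetical, purely for illustration.
import random

random.seed(42)
BASE_RATE = 0.30      # assumed probability that an 'early indicator' leads to later difficulty
POPULATION = 10_000

# Simulate each individual's outcome independently.
outcomes = [random.random() < BASE_RATE for _ in range(POPULATION)]

# Group level: the statistical prediction is very close to what actually happens.
print(f"Predicted cases: {int(BASE_RATE * POPULATION)}, actual cases: {sum(outcomes)}")

# Individual level: the best single guess for any one person is the majority
# outcome ('no difficulty'), which is wrong exactly for those who do develop one.
errors = sum(outcomes)
print(f"Error rate of the best individual-level guess: {errors / POPULATION:.0%}")
```

The group total is predictable to within a fraction of a percent, but nothing in the data says which particular individuals make up that total - which is the sense in which such gathering is useful for planning, not for judging a specific person.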

So I guess it's a case of don't ask what your society might want to do to you, but what you can do for your society, when it asks you.

Thursday, January 20, 2011

The psychology of charity

I came across three interesting articles recently regarding what factors influence charitable donation.

The most unsurprising finding was that knowing someone with a similar issue would make people more likely to donate to a related charity, but it was interesting that this seemed to simply focus their sympathy on certain areas, not increase their sympathy overall.

What was more interesting was the evidence that highlighting the personal/individual nature of a victim was only really effective if the intended audience could identify with that victim. Otherwise such an approach was actually less effective than a more general marketing campaign.

But most thought provoking of all was the evidence from another study that even in cases of clear disaster, our charitable instincts are also infused with a desire to attribute blame.

Dr Hanna Zagefka, from the Department of Psychology at Royal Holloway, explains: "In line with the 'Just World Belief' hypothesis, people have an inherent need to believe that the world is just, and so the suffering of innocents calls into question this just world belief. In order to protect it, people try to construe suffering as just whenever possible, and generally humanly caused events provide more opportunity for victim blame than naturally caused events. In addition, the research shows that victims of natural disasters are generally also perceived to make more of an effort to help themselves, and people like to 'reward' those who are proactive by donating to them."

This is interesting stuff, since it shows how even charity is not a simple case of wanting to do good, but, like all moral behaviour, infused with our own sense of identity and justice. Such attitudes are of course crucial to normal moral judgement of people's actions, but while we would rationally think we don't apply them to obvious cases of misfortune such as natural disasters or the impact of wars, emotionally we still do.

The significance of this is twofold. It means that even charitable individuals should always be ready to appraise their behaviour, and secondly that charities themselves need to put more thought and focus into their campaigns.

I also wonder whether it applies to other areas in society where 'how much we care' sometimes seems to be quite irrational. For example, the difference in emphasis between, say, the rarity of a plane crash and the constant carnage on our roads. It always amazes me that society is prepared to put the time and resources into something like airport security, but not, for example, bring in laws whereby cars won't start if seatbelts aren't on, or if the driver hasn't blown into an onboard breathalyzer. Such measures, though not cheap, would pale in cost and effort in comparison to some other 'high profile' safety measures we willingly enforce, and yet would probably result in many more lives saved.

Apart from the normal psychological ideas of 'availability' (if a plane crash happens anywhere in the world we see it on the news and thus subconsciously think they happen often) and 'habituation' (we hear of road deaths so often we just tune out), I think there is also for these areas a background influence from this 'Just World' idea. There is something particularly helpless about an individual in a plane crash, so it would be unusual if we judged them to blame for it. In contrast, road accidents always involve action as well as luck, and so even when we don't know the details, we might assume there was still some 'fault' involved. Apart from how much we care about how these issues affect others, I think such reasoning is then also involved in how we consider the risk to ourselves. Our chances of dying in a car crash are much higher than a lot of other things, but we probably think it won't happen to us, because we think we would be in control. Of course the reality of most car accidents is the opposite, but we don't normally appreciate that.

Again this has relevance for us at a personal level (how we evaluate risk and apportion blame) but also at a societal one. Maybe news reports of car crashes, disease outbreaks etc. should also learn from these studies, and for example highlight the unpreventable and unlucky sides of the stories.
It can only help...

Friday, January 14, 2011

Whose space is it anyway

(this might belong more in my "random thoughts" page, but am posting it here since I think our online presence is now a real element of our society, and maybe even our psychology)

There's been a lot of talk recently about the demise of My Space, eclipsed as it now is by the behemoth that is Facebook. Initially I admit my attitude was more of a "good riddance to bad rubbish", since I never really liked the site, and although I did register, I think I abandoned it pretty much from the start. Having read some recent articles though, like this one, I've changed my mind somewhat, and would agree that its fall from grace is a loss for the web, but it also says something about how the web is evolving, and maybe not for the good.

The point made is that My Space pages were much more unique and personal than Facebook pages are. This meant a lot of them were pretty confusing, even awful, but I realise now that at least the setup encouraged an independent and distinct self-expression, which Facebook doesn't. Instead there's more of a box-ticking approach, pigeon-holing us all into pre-selected (and marketable) classes (this is another relevant article). And as Facebook becomes more and more the default for everybody, this is something to ponder.

It could be part of the way the internet is starting to "solidify" into a few basic streams, losing its earlier swirling flexibility. Just as the use of app stores might replace general searching of the web at large, this might be another example of people starting to "settle down" with some limited set of technologies, and, since these technologies work for them, they look for and try out new things less, and their horizons slowly narrow. Whether this is really happening is debatable, but it is certainly possible, and worthy of investigation. It would make sense too that, as the majority come online, the web dynamic changes, since most people will be joining by default as it were, and will just want to quickly join in with the main activities that are already ongoing, and that everybody else is doing. This would be in contrast to the earlier days of the web, when most of those involved would have had the awareness of, and zeal for, the openness and mutability of it all. The earlier generations would have been younger, or involved in the "new" economy or other areas that would have let them experience the web as a place of constant change and possibility, and in retrospect the idea of carving out "my space" in cyberspace fitted well with that. Joining Facebook is now, I think, more about checking in to a "communal" space, and hence the orientation is different. Not that it isn't good in its own right - "community" is good, even online - but something is lost in the uniform homogeneity.

My Space was really more of a personal website or blog host, not a standardized notice board like Facebook. Ironically that was why I didn't like it - not because of what it tried to do, but because it didn't do it well enough. Ever since I've had decent constant internet access, since say 1997, I've tried to maintain my own web page, and at the time geocities or whatever suited me better. However, at least My Space tried to foster that general idea of each person having their own "site", and I'm not aware of any major trend that replaces it in that respect.

Though, actually, there are plenty of free simple web hosting sites (I use one myself!), so it is probably more to do with the prevailing will of the web, not the means.

I guess just like not everyone wants to write their own book, not everyone wants to create their own piece of space in the digital/social domain. What My Space represents though is the fact that, thanks to the internet, in both cases, everyone now can, and long may this continue. My Space may be dead, but long live our spaces!

Posted from phone via Blogaway (so excuse any typos!)