Tuesday, November 4, 2014

How to read between the lines

From the 2014 Los Angeles County Ballot Measure P

SAFE NEIGHBORHOOD PARKS, GANG PREVENTION, YOUTH/ SENIOR RECREATION, BEACHES/ WILDLIFE PROTECTION MEASURE.
– To ensure continued funding from an expiring voter-approved measure for improving the safety of neighborhood parks and senior/youth recreation areas; assisting in gang prevention; protecting rivers, beaches, water sources; repairing, acquiring/preserving parks/natural areas; maintaining zoos, museums; providing youth job-training, shall Los Angeles County levy an annual $23/parcel special tax, requiring annual independent financial audits and all funds used locally? 
From the Los Angeles Times

Why does Proposition P apply a regressive, flat per-parcel tax, unlike Proposition A, which assessed its tax using a formula based mostly on a property's size? (That tax ranged from 3 cents to $10,000.) Why should so much of the burden for parks funding be transferred from wealthy landowners to average property owners? Why, if so many of Proposition A's projects were itemized in the ballot measure, does Proposition P not actually itemize anything? Why does it make sense to divide a huge chunk of the funds equally among the five supervisors, for them to spend as they see fit, instead of according to the county's greatest need?
This shows a decent way of reading into what seems to be a simple statement about a special tax. To 'read into' that line, you have to ask questions such as 'What alternatives are there to a flat tax?' and 'What are the potential types of parcels (large, small, expensive, cheap)?', which are exactly the kinds of questions the Los Angeles Times article raises.

Tuesday, October 28, 2014

Confident Idiot

Source

Last March, during the enormous South by Southwest music festival in Austin, Texas, the late-night talk show Jimmy Kimmel Live! sent a camera crew out into the streets to catch hipsters bluffing. “People who go to music festivals pride themselves on knowing who the next acts are,” Kimmel said to his studio audience, “even if they don’t actually know who the new acts are.” So the host had his crew ask festival-goers for their thoughts about bands that don’t exist.

“The big buzz on the street,” said one of Kimmel’s interviewers to a man wearing thick-framed glasses and a whimsical T-shirt, “is Contact Dermatitis. Do you think he has what it takes to really make it to the big time?”
“Absolutely,” came the dazed fan’s reply.

The prank was an installment of Kimmel’s recurring “Lie Witness News” feature, which involves asking pedestrians a variety of questions with false premises. In another episode, Kimmel’s crew asked people on Hollywood Boulevard whether they thought the 2014 film Godzilla was insensitive to survivors of the 1954 giant lizard attack on Tokyo; in a third, they asked whether Bill Clinton gets enough credit for ending the Korean War, and whether his appearance as a judge on America’s Got Talent would damage his legacy. “No,” said one woman to this last question. “It will make him even more popular.”

One can’t help but feel for the people who fall into Kimmel’s trap. Some appear willing to say just about anything on camera to hide their cluelessness about the subject at hand (which, of course, has the opposite effect). Others seem eager to please, not wanting to let the interviewer down by giving the most boringly appropriate response: I don’t know. But for some of these interviewees, the trap may be an even deeper one. The most confident-sounding respondents often seem to think they do have some clue—as if there is some fact, some memory, or some intuition that assures them their answer is reasonable.

At one point during South by Southwest, Kimmel’s crew approached a poised young woman with brown hair. “What have you heard about Tonya and the Hardings?” the interviewer asked. “Have you heard they’re kind of hard-hitting?” Failing to pick up on this verbal wink, the woman launched into an elaborate response about the fictitious band. “Yeah, a lot of men have been talking about them, saying they’re really impressed,” she replied. “They’re usually not fans of female groups, but they’re really making a statement.” From some mental gossamer, she was able to spin an authoritative review of Tonya and the Hardings incorporating certain detailed facts: that they’re real; that they’re female (never mind that, say, Marilyn Manson and Alice Cooper aren’t); and that they’re a tough, boundary-breaking group.

To be sure, Kimmel’s producers must cherry-pick the most laughable interviews to put on the air. But late-night TV is not the only place where one can catch people extemporizing on topics they know nothing about. In the more solemn confines of a research lab at Cornell University, the psychologists Stav Atir, Emily Rosenzweig, and I carry out ongoing research that amounts to a carefully controlled, less flamboyant version of Jimmy Kimmel’s bit. In our work, we ask survey respondents if they are familiar with certain technical concepts from physics, biology, politics, and geography. A fair number claim familiarity with genuine terms like centripetal force and photon. But interestingly, they also claim some familiarity with concepts that are entirely made up, such as the plates of parallax, ultra-lipid, and cholarine. In one study, roughly 90 percent claimed some knowledge of at least one of the nine fictitious concepts we asked them about. In fact, the more well versed respondents considered themselves in a general topic, the more familiarity they claimed with the meaningless terms associated with it in the survey.
It’s odd to see people who claim political expertise assert their knowledge of both Susan Rice (the national security adviser to President Barack Obama) and Michael Merrington (a pleasant-sounding string of syllables). But it’s not that surprising. For more than 20 years, I have researched people’s understanding of their own expertise—formally known as the study of metacognition, the processes by which human beings evaluate and regulate their knowledge, reasoning, and learning—and the results have been consistently sobering, occasionally comical, and never dull.

The American author and aphorist William Feather once wrote that being educated means “being able to differentiate between what you know and what you don’t.” As it turns out, this simple ideal is extremely hard to achieve. Although what we know is often perceptible to us, even the broad outlines of what we don’t know are all too often completely invisible. To a great degree, we fail to recognize the frequency and scope of our ignorance.

In 1999, in the Journal of Personality and Social Psychology, my then graduate student Justin Kruger and I published a paper that documented how, in many areas of life, incompetent people do not recognize—scratch that, cannot recognize—just how incompetent they are, a phenomenon that has come to be known as the Dunning-Kruger effect. Logic itself almost demands this lack of self-insight: For poor performers to recognize their ineptitude would require them to possess the very expertise they lack. To know how skilled or unskilled you are at using the rules of grammar, for instance, you must have a good working knowledge of those rules, an impossibility among the incompetent. Poor performers—and we are all poor performers at some things—fail to see the flaws in their thinking or the answers they lack.

What’s curious is that, in many cases, incompetence does not leave people disoriented, perplexed, or cautious. Instead, the incompetent are often blessed with an inappropriate confidence, buoyed by something that feels to them like knowledge.

This isn’t just an armchair theory. A whole battery of studies conducted by myself and others have confirmed that people who don’t know much about a given set of cognitive, technical, or social skills tend to grossly overestimate their prowess and performance, whether it’s grammar, emotional intelligence, logical reasoning, firearm care and safety, debating, or financial knowledge. College students who hand in exams that will earn them Ds and Fs tend to think their efforts will be worthy of far higher grades; low-performing chess players, bridge players, and medical students, and elderly people applying for a renewed driver’s license, similarly overestimate their competence by a long shot.

Occasionally, one can even see this tendency at work in the broad movements of history. Among its many causes, the 2008 financial meltdown was precipitated by the collapse of an epic housing bubble stoked by the machinations of financiers and the ignorance of consumers. And recent research suggests that many Americans’ financial ignorance is of the inappropriately confident variety. In 2012, the National Financial Capability Study, conducted by the Financial Industry Regulatory Authority (with the U.S. Treasury), asked roughly 25,000 respondents to rate their own financial knowledge, and then went on to measure their actual financial literacy.

The roughly 800 respondents who said they had filed bankruptcy within the previous two years performed fairly dismally on the test—in the 37th percentile, on average. But they rated their overall financial knowledge more, not less, positively than other respondents did. The difference was slight, but it was beyond a statistical doubt: 23 percent of the recently bankrupted respondents gave themselves the highest possible self-rating; among the rest, only 13 percent did so. Why the self-confidence? Like Jimmy Kimmel’s victims, bankrupted respondents were particularly allergic to saying “I don’t know.” Pointedly, when getting a question wrong, they were 67 percent more likely to endorse a falsehood than their peers were. Thus, with a head full of “knowledge,” they considered their financial literacy to be just fine.

Because it’s so easy to judge the idiocy of others, it may be sorely tempting to think this doesn’t apply to you. But the problem of unrecognized ignorance is one that visits us all. And over the years, I’ve become convinced of one key, overarching fact about the ignorant mind. One should not think of it as uninformed. Rather, one should think of it as misinformed.

An ignorant mind is precisely not a spotless, empty vessel, but one that’s filled with the clutter of irrelevant or misleading life experiences, theories, facts, intuitions, strategies, algorithms, heuristics, metaphors, and hunches that regrettably have the look and feel of useful and accurate knowledge. This clutter is an unfortunate by-product of one of our greatest strengths as a species. We are unbridled pattern recognizers and profligate theorizers. Often, our theories are good enough to get us through the day, or at least to an age when we can procreate. But our genius for creative storytelling, combined with our inability to detect our own ignorance, can sometimes lead to situations that are embarrassing, unfortunate, or downright dangerous—especially in a technologically advanced, complex democratic society that occasionally invests mistaken popular beliefs with immense destructive power (See: crisis, financial; war, Iraq). As the humorist Josh Billings once put it, “It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.” (Ironically, one thing many people “know” about this quote is that it was first uttered by Mark Twain or Will Rogers—which just ain’t so.)
Because of the way we are built, and because of the way we learn from our environment, we are all engines of misbelief. And the better we understand how our wonderful yet kludge-ridden, Rube Goldberg engine works, the better we—as individuals and as a society—can harness it to navigate toward a more objective understanding of the truth.

BORN WRONG

 

Some of our deepest intuitions about the world go all the way back to our cradles. Before their second birthday, babies know that two solid objects cannot co-exist in the same space. They know that objects continue to exist when out of sight, and fall if left unsupported. They know that people can get up and move around as autonomous beings, but that the computer sitting on the desk cannot. But not all of our earliest intuitions are so sound.

Very young children also carry misbeliefs that they will harbor, to some degree, for the rest of their lives. Their thinking, for example, is marked by a strong tendency to falsely ascribe intentions, functions, and purposes to organisms. In a child’s mind, the most important biological aspect of a living thing is the role it plays in the realm of all life. Asked why tigers exist, children will emphasize that they were “made for being in a zoo.” Asked why trees produce oxygen, children say they do so to allow animals to breathe.

Any conventional biology or natural science education will attempt to curb this propensity for purpose-driven reasoning. But it never really leaves us. Adults with little formal education show a similar bias. And, when rushed, even professional scientists start making purpose-driven mistakes. The Boston University psychologist Deborah Kelemen and some colleagues demonstrated this in a study that involved asking 80 scientists—people with university jobs in geoscience, chemistry, and physics—to evaluate 100 different statements about “why things happen” in the natural world as true or false. Sprinkled among the explanations were false purpose-driven ones, such as “Moss forms around rocks in order to stop soil erosion” and “The Earth has an ozone layer in order to protect it from UV light.” Study participants were allowed either to work through the task at their own speed, or given only 3.2 seconds to respond to each item. Rushing the scientists caused them to double their endorsements of false purpose-driven explanations, from 15 to 29 percent.

This purpose-driven misconception wreaks particular havoc on attempts to teach one of the most important concepts in modern science: evolutionary theory. Even laypeople who endorse the theory often believe a false version of it. They ascribe a level of agency and organization to evolution that is just not there. If you ask many laypeople their understanding of why, say, cheetahs can run so fast, they will explain it’s because the cats surmised, almost as a group, that they could catch more prey if they could just run faster, and so they acquired the attribute and passed it along to their cubs. Evolution, in this view, is essentially a game of species-level strategy.

This idea of evolution misses the essential role played by individual differences and competition between members of a species in response to environmental pressures: Individual cheetahs who can run faster catch more prey, live longer, and reproduce more successfully; slower cheetahs lose out, and die out—leaving the species to drift toward becoming faster overall. Evolution is the result of random differences and natural selection, not agency or choice.

But belief in the “agency” model of evolution is hard to beat back. While educating people about evolution can indeed lead them from being uninformed to being well informed, in some stubborn instances it also moves them into the confidently misinformed category. In 2014, Tony Yates and Edmund Marek published a study that tracked the effect of high school biology classes on 536 Oklahoma high school students’ understanding of evolutionary theory. The students were rigorously quizzed on their knowledge of evolution before taking introductory biology, and then again just afterward. Not surprisingly, the students’ confidence in their knowledge of evolutionary theory shot up after instruction, and they endorsed a greater number of accurate statements. So far, so good.

The trouble is that the number of misconceptions the group endorsed also shot up. For example, instruction caused the percentage of students strongly agreeing with the true statement “Evolution cannot cause an organism’s traits to change during its lifetime” to rise from 17 to 20 percent—but it also caused those strongly disagreeing to rise from 16 to 19 percent. In response to the likewise true statement “Variation among individuals is important for evolution to occur,” exposure to instruction produced an increase in strong agreement from 11 to 22 percent, but strong disagreement also rose from nine to 12 percent. Tellingly, the only response that uniformly went down after instruction was “I don’t know.”

And it’s not just evolution that bedevils students. Again and again, research has found that conventional educational practices largely fail to eradicate a number of our cradle-born misbeliefs. Education fails to correct people who believe that vision is made possible only because the eye emits some energy or substance into the environment. It fails to correct common intuitions about the trajectory of falling objects. And it fails to disabuse students of the idea that light and heat act under the same laws as material substances. What education often does appear to do, however, is imbue us with confidence in the errors we retain.

MISAPPLIED RULES

 

Imagine that the illustration below represents a curved tube lying horizontally on a table:
[Illustration: a curved tube lying flat on a table, with three possible exit paths for the ball, marked A, B, and C]
In a study of intuitive physics in 2013, Elanor Williams, Justin Kruger, and I presented people with several variations on this curved-tube image and asked them to identify the trajectory a ball would take (marked A, B, or C in the illustration) after it had traveled through each. Some people got perfect scores, and seemed to know it, being quite confident in their answers. Some people did a bit less well—and, again, seemed to know it, as their confidence was much more muted.

But something curious started happening as we began to look at the people who did extremely badly on our little quiz. By now, you may be able to predict it: These people expressed more, not less, confidence in their performance. In fact, people who got none of the items right often expressed confidence that matched that of the top performers. Indeed, this study produced the most dramatic example of the Dunning-Kruger effect we had ever seen: When looking only at the confidence of people getting 100 percent versus zero percent right, it was often impossible to tell who was in which group.

Why? Because both groups “knew something.” They knew there was a rigorous, consistent rule that a person should follow to predict the balls’ trajectories. One group knew the right Newtonian principle: that the ball would continue in the direction it was going the instant it left the tube—Path B. Freed of the tube’s constraint, it would just go straight.

People who got every item wrong typically answered that the ball would follow Path A. Essentially, their rule was that the tube would impart some curving impetus to the trajectory of the ball, which it would continue to follow upon its exit. This answer is demonstrably incorrect—but a plurality of people endorse it.

These people are in good company. In 1500 A.D., Path A would have been the accepted answer among sophisticates with an interest in physics. Both Leonardo da Vinci and French philosopher Jean Buridan endorsed it. And it does make some sense. A theory of curved impetus would explain common, everyday puzzles, such as why wheels continue to rotate even after someone stops pushing the cart, or why the planets continue their tight and regular orbits around the sun. With those problems “explained,” it’s an easy step to transfer this explanation to other problems like those involving tubes.
What this study illustrates is another general way—in addition to our cradle-born errors—in which humans frequently generate misbeliefs: We import knowledge from appropriate settings into ones where it is inappropriate.

Here’s another example: According to Pauline Kim, a professor at Washington University Law School, people tend to make inferences about the law based on what they know about more informal social norms. This frequently leads them to misunderstand their rights—and in areas like employment law, to wildly overestimate them. In 1997, Kim presented roughly 300 residents of Buffalo, New York, with a series of morally abhorrent workplace scenarios—for example, an employee is fired for reporting that a co-worker has been stealing from the company—that were nonetheless legal under the state’s “at-will” employment regime. Eighty to 90 percent of the Buffalonians incorrectly identified each of these distasteful scenarios as illegal, revealing how little they understood about how much freedom employers actually enjoy to fire employees. (Why does this matter? Legal scholars had long defended “at-will” employment rules on the grounds that employees consent to them in droves without seeking better terms of employment. What Kim showed was that employees seldom understand what they’re consenting to.)

Doctors, too, are quite familiar with the problem of inappropriately transferred knowledge in their dealings with patients. Often, it’s not the medical condition itself that a physician needs to defeat as much as patient misconceptions that protect it. Elderly patients, for example, frequently refuse to follow a doctor’s advice to exercise to alleviate pain—one of the most effective strategies available—because the physical soreness and discomfort they feel when they exercise is something they associate with injury and deterioration. Research by the behavioral economist Sendhil Mullainathan has found that mothers in India often withhold water from infants with diarrhea because they mistakenly conceive of their children as leaky buckets—rather than as increasingly dehydrated creatures in desperate need of water.

MOTIVATED REASONING

 

Some of our most stubborn misbeliefs arise not from primitive childlike intuitions or careless category errors, but from the very values and philosophies that define who we are as individuals. Each of us possesses certain foundational beliefs—narratives about the self, ideas about the social order—that essentially cannot be violated: To contradict them would call into question our very self-worth. As such, these views demand fealty from other opinions. And any information that we glean from the world is amended, distorted, diminished, or forgotten in order to make sure that these sacrosanct beliefs remain whole and unharmed.

One very commonly held sacrosanct belief, for example, goes something like this: I am a capable, good, and caring person. Any information that contradicts this premise is liable to meet serious mental resistance. Political and ideological beliefs, too, often cross over into the realm of the sacrosanct. The anthropological theory of cultural cognition suggests that people everywhere tend to sort ideologically into cultural worldviews diverging along a couple of axes: They are either individualist (favoring autonomy, freedom, and self-reliance) or communitarian (giving more weight to benefits and costs borne by the entire community); and they are either hierarchist (favoring the distribution of social duties and resources along a fixed ranking of status) or egalitarian (dismissing the very idea of ranking people according to status). According to the theory of cultural cognition, humans process information in a way that not only reflects these organizing principles, but also reinforces them. These ideological anchor points can have a profound and wide-ranging impact on what people believe, and even on what they “know” to be true.

It is perhaps not so surprising to hear that facts, logic, and knowledge can be bent to accord with a person’s subjective worldview; after all, we accuse our political opponents of this kind of “motivated reasoning” all the time. But the extent of this bending can be remarkable. In ongoing work with the political scientist Peter Enns, my lab has found that a person’s politics can warp other sets of logical or factual beliefs so much that they come into direct contradiction with one another. In a survey of roughly 500 Americans conducted in late 2010, we found that over a quarter of liberals (but only six percent of conservatives) endorsed both the statement “President Obama’s policies have already created a strong revival in the economy” and “Statutes and regulations enacted by the previous Republican presidential administration have made a strong economic recovery impossible.” Both statements are pleasing to the liberal eye and honor a liberal ideology, but how can Obama have already created a strong recovery that Republican policies have rendered impossible? Among conservatives, 27 percent (relative to just 10 percent of liberals) agreed both that “President Obama’s rhetorical skills are elegant but are insufficient to influence major international issues” and that “President Obama has not done enough to use his rhetorical skills to effect regime change in Iraq.” But if Obama’s skills are insufficient, why should he be criticized for not using them to influence the Iraqi government?

Sacrosanct ideological commitments can also drive us to develop quick, intense opinions on topics we know virtually nothing about—topics that, on their face, have nothing to do with ideology. Consider the emerging field of nanotechnology. Nanotech, loosely defined, involves the fabrication of products at the atomic or molecular level that have applications in medicine, energy production, biomaterials, and electronics. Like pretty much any new technology, nanotech carries the promise of great benefit (antibacterial food containers!) and the risk of serious downsides (nano-surveillance technology!).

In 2006, Daniel Kahan, a professor at Yale Law School, performed a study together with some colleagues on public perceptions of nanotechnology. They found, as other surveys had before, that most people knew little to nothing about the field. They also found that ignorance didn’t stop people from opining about whether nanotechnology’s risks outweighed its benefits.

When Kahan surveyed uninformed respondents, their opinions were all over the map. But when he gave another group of respondents a very brief, meticulously balanced description of the promises and perils of nanotech, the remarkable gravitational pull of deeply held sacrosanct beliefs became apparent. With just two paragraphs of scant (though accurate) information to go on, people’s views of nanotechnology split markedly—and aligned with their overall worldviews. Hierarchists/individualists found themselves viewing nanotechnology more favorably. Egalitarians/collectivists took the opposite stance, insisting that nanotechnology has more potential for harm than good.

Why would this be so? Because of underlying beliefs. Hierarchists, who are favorably disposed to people in authority, may respect industry and scientific leaders who trumpet the unproven promise of nanotechnology. Egalitarians, on the other hand, may fear that the new technology could present an advantage that accrues to only a few people. And collectivists might worry that nanotechnology firms will pay insufficient heed to their industry’s effects on the environment and public health. Kahan’s conclusion: If two paragraphs of text are enough to send people on a glide path to polarization, simply giving members of the public more information probably won’t help them arrive at a shared, neutral understanding of the facts; it will just reinforce their biased views.

One might think that opinions about an esoteric technology would be hard to come by. Surely, to know whether nanotech is a boon to humankind or a step toward doomsday would require some sort of knowledge about materials science, engineering, industry structure, regulatory issues, organic chemistry, surface science, semiconductor physics, microfabrication, and molecular biology. Every day, however, people rely on the cognitive clutter in their minds—whether it’s an ideological reflex, a misapplied theory, or a cradle-born intuition—to answer technical, political, and social questions they have little or no direct expertise in. We are never all that far from Tonya and the Hardings.

SEEING THROUGH THE CLUTTER

 

Unfortunately for all of us, policies and decisions that are founded on ignorance have a strong tendency, sooner or later, to blow up in one’s face. So how can policymakers, teachers, and the rest of us cut through all the counterfeit knowledge—our own and our neighbors’—that stands in the way of our ability to make truly informed judgments?

The way we traditionally conceive of ignorance—as an absence of knowledge—leads us to think of education as its natural antidote. But education, even when done skillfully, can produce illusory confidence. Here’s a particularly frightful example: Driver’s education courses, particularly those aimed at handling emergency maneuvers, tend to increase, rather than decrease, accident rates. They do so because training people to handle, say, snow and ice leaves them with the lasting impression that they’re permanent experts on the subject. In fact, their skills usually erode rapidly after they leave the course. And so, months or even decades later, they have confidence but little leftover competence when their wheels begin to spin.

In cases like this, the most enlightened approach, as proposed by Swedish researcher Nils Petter Gregersen, may be to avoid teaching such skills at all. Instead of training drivers how to negotiate icy conditions, Gregersen suggests, perhaps classes should just convey their inherent danger—they should scare inexperienced students away from driving in winter conditions in the first place, and leave it at that.

But, of course, guarding people from their own ignorance by sheltering them from the risks of life is seldom an option. Actually getting people to part with their misbeliefs is a far trickier, far more important task. Luckily, a science is emerging, led by such scholars as Stephan Lewandowsky at the University of Bristol and Ullrich Ecker of the University of Western Australia, that could help.

In the classroom, some of the best techniques for disarming misconceptions are essentially variations on the Socratic method. To eliminate the most common misbeliefs, the instructor can open a lesson with them—and then show students the explanatory gaps those misbeliefs leave yawning or the implausible conclusions they lead to. For example, an instructor might start a discussion of evolution by laying out the purpose-driven evolutionary fallacy, prompting the class to question it. (How do species just magically know what advantages they should develop to confer to their offspring? How do they manage to decide to work as a group?) Such an approach can make the correct theory more memorable when it’s unveiled, and can prompt general improvements in analytical skills.

Then, of course, there is the problem of rampant misinformation in places that, unlike classrooms, are hard to control—like the Internet and news media. In these Wild West settings, it’s best not to repeat common misbeliefs at all. Telling people that Barack Obama is not a Muslim fails to change many people’s minds, because they frequently remember everything that was said—except for the crucial qualifier “not.” Rather, to successfully eradicate a misbelief requires not only removing the misbelief, but filling the void left behind (“Obama was baptized in 1988 as a member of the United Church of Christ”). If repeating the misbelief is absolutely necessary, researchers have found it helps to provide clear and repeated warnings that the misbelief is false. I repeat, false.

The most difficult misconceptions to dispel, of course, are those that reflect sacrosanct beliefs. And the truth is that often these notions can’t be changed. Calling a sacrosanct belief into question calls the entire self into question, and people will actively defend views they hold dear. This kind of threat to a core belief, however, can sometimes be alleviated by giving people the chance to shore up their identity elsewhere. Researchers have found that asking people to describe aspects of themselves that make them proud, or report on values they hold dear, can make any incoming threat seem, well, less threatening.

For example, in a study conducted by Geoffrey Cohen, David Sherman, and other colleagues, self-described American patriots were more receptive to the claims of a report critical of U.S. foreign policy if, beforehand, they wrote an essay about an important aspect of themselves, such as their creativity, sense of humor, or family, and explained why this aspect was particularly meaningful to them. In a second study, in which pro-choice college students negotiated over what federal abortion policy should look like, participants made more concessions to restrictions on abortion after writing similar self-affirmative essays.

Sometimes, too, researchers have found that sacrosanct beliefs themselves can be harnessed to persuade a subject to reconsider a set of facts with less prejudice. For example, conservatives tend not to endorse policies that preserve the environment as much as liberals do. But conservatives do care about issues that involve “purity” in thought, deed, and reality. Casting environmental protection as a chance to preserve the purity of the Earth causes conservatives to favor those policies much more, as research by Matthew Feinberg and Robb Willer of Stanford University suggests. In a similar vein, liberals can be persuaded to raise military spending if such a policy is linked to progressive values like fairness and equity beforehand—by, for instance, noting that the military offers recruits a way out of poverty, or that military promotion standards apply equally to all.

But here is the real challenge: How can we learn to recognize our own ignorance and misbeliefs? To begin with, imagine that you are part of a small group that needs to make a decision about some matter of importance. Behavioral scientists often recommend that small groups appoint someone to serve as a devil’s advocate—a person whose job is to question and criticize the group’s logic. While this approach can prolong group discussions, irritate the group, and be uncomfortable, the decisions that groups ultimately reach are usually more accurate and more solidly grounded than they otherwise would be.

For individuals, the trick is to be your own devil’s advocate: to think through how your favored conclusions might be misguided; to ask yourself how you might be wrong, or how things might turn out differently from what you expect. It helps to try practicing what the psychologist Charles Lord calls “considering the opposite.” To do this, I often imagine myself in a future in which I have turned out to be wrong in a decision, and then consider what the likeliest path was that led to my failure. And lastly: Seek advice. Other people may have their own misbeliefs, but a discussion can often be sufficient to rid a serious person of his or her most egregious misconceptions.

CIVICS FOR ENLIGHTENED DUMMIES

 

In an edition of “Lie Witness News” last January, Jimmy Kimmel’s cameras decamped to the streets of Los Angeles the day before President Barack Obama was scheduled to give his annual State of the Union address. Interviewees were asked about John Boehner’s nap during the speech and the moment at the end when Obama faked a heart attack. Reviews of the fictitious speech ranged from “awesome” to “powerful” to just “all right.” As usual, the producers had no trouble finding people who were willing to hold forth on events they couldn’t know anything about.

American comedians like Kimmel and Jay Leno have a long history of lampooning their countrymen’s ignorance, and American scolds have a long history of lamenting it. Every few years, for at least the past century, various groups of serious-minded citizens have conducted studies of civic literacy—asking members of the public about the nation’s history and governance—and held up the results as cause for grave concern over cultural decline and decay. In 1943, after a survey of 7,000 college freshmen found that only six percent could identify the original 13 colonies (with some believing that Abraham Lincoln, “our first president,” “emaciated the slaves”), the New York Times lamented the nation’s “appallingly ignorant” youth. In 2002, after a national test of fourth, eighth, and 12th graders produced similar results, the Weekly Standard pronounced America’s students “dumb as rocks.”

In 2008, the Intercollegiate Studies Institute surveyed 2,508 Americans and found that 20 percent of them think the electoral college “trains those aspiring for higher political office” or “was established to supervise the first televised presidential debates.” Alarms were again raised about the decline of civic literacy. Ironically, as Stanford historian Sam Wineburg has written, people who lament America’s worsening ignorance of its own history are themselves often blind to how many before them have made the exact same lament; a look back suggests not a falling off from some baseline of American greatness, but a fairly constant level of clumsiness with the facts.

The impulse to worry over all these flubbed answers does make a certain amount of sense given that the subject is civics. “The questions that stumped so many students,” lamented Secretary of Education Rod Paige after a 2001 test, “involve the most fundamental concepts of our democracy, our growth as a nation, and our role in the world.” One implicit, shame-faced question seems to be: What would the Founding Fathers think of these benighted descendants?

But I believe we already know what the Founding Fathers would think. As good citizens of the Enlightenment, they valued recognizing the limits of one’s knowledge at least as much as they valued retaining a bunch of facts. Thomas Jefferson, lamenting the quality of political journalism in his day, once observed that a person who avoided newspapers would be better informed than a daily reader, in that someone “who knows nothing is closer to the truth than he whose mind is filled with falsehoods and errors.” Benjamin Franklin wrote that “a learned blockhead is a greater blockhead than an ignorant one.” Another quote sometimes attributed to Franklin has it that “the doorstep to the temple of wisdom is a knowledge of our own ignorance.”

The built-in features of our brains, and the life experiences we accumulate, do in fact fill our heads with immense knowledge; what they do not confer is insight into the dimensions of our ignorance. As such, wisdom may not involve facts and formulas so much as the ability to recognize when a limit has been reached. Stumbling through all our cognitive clutter just to recognize a true “I don’t know” may not constitute failure as much as it does an enviable success, a crucial signpost that shows us we are traveling in the right direction toward the truth.

Monday, October 20, 2014

Isaac Asimov Mulls “How Do People Get New Ideas?”

Source 

ON CREATIVITY
How do people get new ideas?
Presumably, the process of creativity, whatever it is, is essentially the same in all its branches and varieties, so that the evolution of a new art form, a new gadget, a new scientific principle, all involve common factors. We are most interested in the “creation” of a new scientific principle or a new application of an old one, but we can be general here.
One way of investigating the problem is to consider the great ideas of the past and see just how they were generated. Unfortunately, the method of generation is never clear even to the “generators” themselves.
But what if the same earth-shaking idea occurred to two men, simultaneously and independently? Perhaps, the common factors involved would be illuminating. Consider the theory of evolution by natural selection, independently created by Charles Darwin and Alfred Wallace.
There is a great deal in common there. Both traveled to far places, observing strange species of plants and animals and the manner in which they varied from place to place. Both were keenly interested in finding an explanation for this, and both failed until each happened to read Malthus’s “Essay on Population.”
Both then saw how the notion of overpopulation and weeding out (which Malthus had applied to human beings) would fit into the doctrine of evolution by natural selection (if applied to species generally).
Obviously, then, what is needed is not only people with a good background in a particular field, but also people capable of making a connection between item 1 and item 2 which might not ordinarily seem connected.
Undoubtedly in the first half of the 19th century, a great many naturalists had studied the manner in which species were differentiated among themselves. A great many people had read Malthus. Perhaps some both studied species and read Malthus. But what you needed was someone who studied species, read Malthus, and had the ability to make a cross-connection.
That is the crucial point that is the rare characteristic that must be found. Once the cross-connection is made, it becomes obvious. Thomas H. Huxley is supposed to have explained after reading On the Origin of Species, “How stupid of me not to have thought of this.”
But why didn’t he think of it? The history of human thought would make it seem that there is difficulty in thinking of an idea even when all the facts are on the table. Making the cross-connection requires a certain daring. It must, for any cross-connection that does not require daring is performed at once by many and develops not as a “new idea,” but as a mere “corollary of an old idea.”
It is only afterward that a new idea seems reasonable. To begin with, it usually seems unreasonable. It seems the height of unreason to suppose the earth was round instead of flat, or that it moved instead of the sun, or that objects required a force to stop them when in motion, instead of a force to keep them moving, and so on.
A person willing to fly in the face of reason, authority, and common sense must be a person of considerable self-assurance. Since he occurs only rarely, he must seem eccentric (in at least that respect) to the rest of us. A person eccentric in one respect is often eccentric in others.
Consequently, the person who is most likely to get new ideas is a person of good background in the field of interest and one who is unconventional in his habits. (To be a crackpot is not, however, enough in itself.)
Once you have the people you want, the next question is: Do you want to bring them together so that they may discuss the problem mutually, or should you inform each of the problem and allow them to work in isolation?
My feeling is that as far as creativity is concerned, isolation is required. The creative person is, in any case, continually working at it. His mind is shuffling his information at all times, even when he is not conscious of it. (The famous example of Kekule working out the structure of benzene in his sleep is well-known.)
The presence of others can only inhibit this process, since creation is embarrassing. For every new good idea you have, there are a hundred, ten thousand foolish ones, which you naturally do not care to display.
Nevertheless, a meeting of such people may be desirable for reasons other than the act of creation itself.
No two people exactly duplicate each other’s mental stores of items. One person may know A and not B, another may know B and not A, and either knowing A and B, both may get the idea—though not necessarily at once or even soon.
Furthermore, the information may not only be of individual items A and B, but even of combinations such as A-B, which in themselves are not significant. However, if one person mentions the unusual combination of A-B and another unusual combination A-C, it may well be that the combination A-B-C, which neither has thought of separately, may yield an answer.
It seems to me then that the purpose of cerebration sessions is not to think up new ideas but to educate the participants in facts and fact-combinations, in theories and vagrant thoughts.
But how to persuade creative people to do so? First and foremost, there must be ease, relaxation, and a general sense of permissiveness. The world in general disapproves of creativity, and to be creative in public is particularly bad. Even to speculate in public is rather worrisome. The individuals must, therefore, have the feeling that the others won’t object.
If a single individual present is unsympathetic to the foolishness that would be bound to go on at such a session, the others would freeze. The unsympathetic individual may be a gold mine of information, but the harm he does will more than compensate for that. It seems necessary to me, then, that all people at a session be willing to sound foolish and listen to others sound foolish.
If a single individual present has a much greater reputation than the others, or is more articulate, or has a distinctly more commanding personality, he may well take over the conference and reduce the rest to little more than passive obedience. The individual may himself be extremely useful, but he might as well be put to work solo, for he is neutralizing the rest.
The optimum number of the group would probably not be very high. I should guess that no more than five would be wanted. A larger group might have a larger total supply of information, but there would be the tension of waiting to speak, which can be very frustrating. It would probably be better to have a number of sessions at which the people attending would vary, rather than one session including them all. (This would involve a certain repetition, but even repetition is not in itself undesirable. It is not what people say at these conferences, but what they inspire in each other later on.)
For best purposes, there should be a feeling of informality. Joviality, the use of first names, joking, relaxed kidding are, I think, of the essence—not in themselves, but because they encourage a willingness to be involved in the folly of creativeness. For this purpose I think a meeting in someone’s home or over a dinner table at some restaurant is perhaps more useful than one in a conference room.
Probably more inhibiting than anything else is a feeling of responsibility. The great ideas of the ages have come from people who weren’t paid to have great ideas, but were paid to be teachers or patent clerks or petty officials, or were not paid at all. The great ideas came as side issues.
To feel guilty because one has not earned one’s salary because one has not had a great idea is the surest way, it seems to me, of making it certain that no great idea will come in the next time either.
Yet your company is conducting this cerebration program on government money. To think of congressmen or the general public hearing about scientists fooling around, boondoggling, telling dirty jokes, perhaps, at government expense, is to break into a cold sweat. In fact, the average scientist has enough public conscience not to want to feel he is doing this even if no one finds out.
I would suggest that members at a cerebration session be given sinecure tasks to do—short reports to write, or summaries of their conclusions, or brief answers to suggested problems—and be paid for that; the payment being the fee that would ordinarily be paid for the cerebration session. The cerebration session would then be officially unpaid-for and that, too, would allow considerable relaxation.
I do not think that cerebration sessions can be left unguided. There must be someone in charge who plays a role equivalent to that of a psychoanalyst. A psychoanalyst, as I understand it, by asking the right questions (and except for that interfering as little as possible), gets the patient himself to discuss his past life in such a way as to elicit new understanding of it in his own eyes.
In the same way, a session-arbiter will have to sit there, stirring up the animals, asking the shrewd question, making the necessary comment, bringing them gently back to the point. Since the arbiter will not know which question is shrewd, which comment necessary, and what the point is, his will not be an easy job.
As for “gadgets” designed to elicit creativity, I think these should arise out of the bull sessions themselves. If thoroughly relaxed, free of responsibility, discussing something of interest, and being by nature unconventional, the participants themselves will create devices to stimulate discussion.

Comparing speeds using different import methods in Python

test_argv.py
# argv is imported into this module's namespace; each iteration does one global name lookup
from sys import argv

i = 0
while i < 10000000:
    len(argv)
    i += 1

test_local.py
# sys.argv is looked up once and bound to a module-level name before the loop
import sys

local = sys.argv
i = 0
while i < 1E7:
    len(local)
    i += 1

test_sys.py
# each iteration looks up the global name "sys" and then its "argv" attribute
import sys

i = 0
while i < 1E7:
    len(sys.argv)
    i += 1

time python test_argv.py; time python test_local.py; time python test_sys.py

test_argv.py:
real    0m1.081s
user    0m1.081s
sys     0m0.000s

test_local.py:
real    0m1.260s
user    0m1.248s
sys     0m0.012s

test_sys.py:
real    0m1.420s
user    0m1.420s
sys     0m0.000s
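
The ordering (direct argv lookup fastest, repeated sys.argv attribute lookup slowest) can also be checked with the standard-library timeit module, which handles the looping and timing itself. The snippet below is just a rough sketch of that approach, written as Python 2 to match the rest of the post; the exact numbers will of course differ from the wall-clock times above.

# check_lookups.py: a hypothetical timeit version of the three scripts above
import timeit

print timeit.timeit("len(argv)", setup="from sys import argv", number=10000000)
print timeit.timeit("len(local)", setup="import sys; local = sys.argv", number=10000000)
print timeit.timeit("len(sys.argv)", setup="import sys", number=10000000)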

Friday, September 5, 2014

Python Pitfalls

#1: Misusing expressions as defaults for function arguments

Python allows you to specify that a function argument is optional by providing a default value for it. While this is a great feature of the language, it can lead to some confusion when the default value is mutable. For example, consider this Python function definition:
>>> def foo(bar=[]):        # bar is optional and defaults to [] if not specified
...    bar.append("baz")    # but this line could be problematic, as we'll see...
...    return bar
A common mistake is to think that the optional argument will be set to the specified default expression each time the function is called without supplying a value for the optional argument. In the above code, for example, one might expect that calling foo() repeatedly (i.e., without specifying a bar argument) would always return ["baz"], since the assumption would be that each time foo() is called (without a bar argument specified) bar is set to [] (i.e., a new empty list).
But let’s look at what actually happens when you do this:
>>> foo()
["baz"]
>>> foo()
["baz", "baz"]
>>> foo()
["baz", "baz", "baz"]
Huh? Why did it keep appending the default value of "baz" to an existing list each time foo() was called, rather than creating a new list each time?
The answer is that the default value for a function argument is only evaluated once, at the time that the function is defined. Thus, the bar argument is initialized to its default (i.e., an empty list) only when foo() is first defined, but then calls to foo() (i.e., without a bar argument specified) will continue to use the same list to which bar was originally initialized.
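One way to see this directly (in the same Python 2 shell, since the examples here use print statements elsewhere) is to inspect the function's stored defaults: the default list lives on the function object itself and simply accumulates entries across calls.
>>> def foo(bar=[]):
...    bar.append("baz")
...    return bar
...
>>> foo()
['baz']
>>> foo()
['baz', 'baz']
>>> foo.func_defaults    # the one list object shared by every call (Python 2)
(['baz', 'baz'],)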
FYI, a common workaround for this is as follows:
>>> def foo(bar=None):
...    if bar is None:  # or if not bar:
...        bar = []
...    bar.append("baz")
...    return bar
...
>>> foo()
["baz"]
>>> foo()
["baz"]
>>> foo()
["baz"]

#2: Using class variables incorrectly

Consider the following example:
>>> class A(object):
...     x = 1
...
>>> class B(A):
...     pass
...
>>> class C(A):
...     pass
...
>>> print A.x, B.x, C.x
1 1 1
Makes sense.
>>> B.x = 2
>>> print A.x, B.x, C.x
1 2 1
Yup, again as expected.
>>> A.x = 3
>>> print A.x, B.x, C.x
3 2 3
What the $%#!&?? We only changed A.x. Why did C.x change too?
In Python, class variables are internally handled as dictionaries and follow what is often referred to as Method Resolution Order (MRO). So in the above code, since the attribute x is not found in class C, it will be looked up in its base classes (only A in the above example, although Python supports multiple inheritance). In other words, C doesn’t have its own x property, independent of A. Thus, references to C.x are in fact references to A.x.
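A quick way to confirm this, continuing the same session, is to give C its own x; once an entry for x exists in C.__dict__, lookups on C no longer fall through to A:
>>> C.x = 4               # C now has its own x, shadowing A.x
>>> A.x = 5
>>> print A.x, B.x, C.x
5 2 4
>>> 'x' in C.__dict__
True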

#3: Specifying parameters incorrectly for an exception block

Suppose you have the following code:
>>> try:
...     l = ["a", "b"]
...     int(l[2])
... except ValueError, IndexError:  # To catch both exceptions, right?
...     pass
...
Traceback (most recent call last):
  File "", line 3, in 
IndexError: list index out of range
The problem here is that the except statement does not take a list of exceptions specified in this manner. Rather, in Python 2.x, the syntax except Exception, e is used to bind the exception to the optional second parameter specified (in this case e), in order to make it available for further inspection. As a result, in the above code, the IndexError exception is not being caught by the except statement; rather, the exception instead ends up being bound to a parameter named IndexError.
The proper way to catch multiple exceptions in an except statement is to specify the first parameter as a tuple containing all exceptions to be caught. Also, for maximum portability, use the as keyword, since that syntax is supported by both Python 2 and Python 3:
>>> try:
...     l = ["a", "b"]
...     int(l[2])
... except (ValueError, IndexError) as e:  
...     pass
...
>>>

#4: Misunderstanding Python scope rules

Python scope resolution is based on what is known as the LEGB rule, which is shorthand for Local, Enclosing, Global, Built-in. Seems straightforward enough, right? Well, actually, there are some subtleties to the way this works in Python. Consider the following:
>>> x = 10
>>> def foo():
...     x += 1
...     print x
...
>>> foo()
Traceback (most recent call last):
  File "", line 1, in 
  File "", line 2, in foo
UnboundLocalError: local variable 'x' referenced before assignment
What’s the problem?
The above error occurs because, when you make an assignment to a variable in a scope, that variable is automatically considered by Python to be local to that scope and shadows any similarly named variable in any outer scope.
Many are thereby surprised to get an UnboundLocalError in previously working code when it is modified by adding an assignment statement somewhere in the body of a function.
It is particularly common for this to trip up developers when using lists. Consider the following example:
>>> lst = [1, 2, 3]
>>> def foo1():
...     lst.append(5)   # This works ok...
...
>>> foo1()
>>> lst
[1, 2, 3, 5]

>>> lst = [1, 2, 3]
>>> def foo2():
...     lst += [5]      # ... but this bombs!
...
>>> foo2()
Traceback (most recent call last):
  File "", line 1, in 
  File "", line 2, in foo
UnboundLocalError: local variable 'lst' referenced before assignment
Huh? Why did foo2 bomb while foo1 ran fine?
The answer is the same as in the prior example, but is admittedly more subtle. foo1 is not making an assignment to lst, whereas foo2 is. Remembering that lst += [5] is really just shorthand for lst = lst + [5], we see that we are attempting to assign a value to lst (therefore presumed by Python to be in the local scope). However, the value we are looking to assign to lst is based on lst itself (again, now presumed to be in the local scope), which has not yet been defined. Boom.
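The usual fixes follow from that explanation: either pass the list in and return the new value, or declare the name global if rebinding the module-level variable really is what you want. A minimal sketch of the latter, using a hypothetical foo3 to keep it separate from the functions above:
>>> lst = [1, 2, 3]
>>> def foo3():
...     global lst       # the assignment below targets the module-level name
...     lst += [5]
...
>>> foo3()
>>> lst
[1, 2, 3, 5]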

#5: Modifying a list while iterating over it 

The problem with the following code should be fairly obvious:
>>> odd = lambda x : bool(x % 2)
>>> numbers = [n for n in range(10)]
>>> for i in range(len(numbers)):
...     if odd(numbers[i]):
...         del numbers[i]  # BAD: Deleting item from a list while iterating over it
...
Traceback (most recent call last):
     File "", line 2, in 
IndexError: list index out of range
Deleting an item from a list or array while iterating over it is a faux pas well known to any experienced software developer. But while the example above may be fairly obvious, even advanced developers can be unintentionally bitten by this in code that is much more complex.
Fortunately, Python incorporates a number of elegant programming paradigms which, when used properly, can result in significantly simplified and streamlined code. A side benefit of this is that simpler code is less likely to be bitten by the accidental-deletion-of-a-list-item-while-iterating-over-it bug. One such paradigm is that of list comprehensions. Moreover, list comprehensions are particularly useful for avoiding this specific problem, as shown by this alternate implementation of the above code which works perfectly:
>>> odd = lambda x : bool(x % 2)
>>> numbers = [n for n in range(10)]
>>> numbers[:] = [n for n in numbers if not odd(n)]  # ahh, the beauty of it all
>>> numbers
[0, 2, 4, 6, 8]

#6: Confusing how Python binds variables in closures

Consider the following example:
>>> def create_multipliers():
...     return [lambda x : i * x for i in range(5)]
...
>>> for multiplier in create_multipliers():
...     print multiplier(2)
...
You might expect the following output:
0
2
4
6
8
But you actually get:
8
8
8
8
8
Surprise!
This happens due to Python’s late binding behavior which says that the values of variables used in closures are looked up at the time the inner function is called. So in the above code, whenever any of the returned functions are called, the value of i is looked up in the surrounding scope at the time it is called (and by then, the loop has completed, so i has already been assigned its final value of 4).
The solution to this is a bit of a hack:
>>> def create_multipliers():
...     return [lambda x, i=i : i * x for i in range(5)]
...
>>> for multiplier in create_multipliers():
...     print multiplier(2)
...
0
2
4
6
8
Voilà! We are taking advantage of default arguments here to generate anonymous functions in order to achieve the desired behavior. Some would call this elegant. Some would call it subtle. Some hate it. But if you’re a Python developer, it’s important to understand in any case.
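If the default-argument trick feels too clever, a sketch of an alternative is to bind the loop variable at creation time with functools.partial:
>>> from functools import partial
>>> from operator import mul
>>> def create_multipliers():
...     return [partial(mul, i) for i in range(5)]
...
>>> for multiplier in create_multipliers():
...     print multiplier(2)
...
0
2
4
6
8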

#7: Creating circular module dependencies

Let’s say you have two files, a.py and b.py, each of which imports the other, as follows:
In a.py:
import b

def f():
    return b.x
 
print f()
And in b.py:
import a

x = 1

def g():
    print a.f()
First, let’s try importing a.py:
>>> import a
1
Worked just fine. Perhaps that surprises you. After all, we do have a circular import here which presumably should be a problem, shouldn’t it?
The answer is that the mere presence of a circular import is not in and of itself a problem in Python. If a module has already been imported, Python is smart enough not to try to re-import it. However, depending on the point at which each module is attempting to access functions or variables defined in the other, you may indeed run into problems.
So returning to our example, when we imported a.py, it had no problem importing b.py, since b.py does not require anything from a.py to be defined at the time it is imported. The only reference in b.py to a is the call to a.f(). But that call is in g() and nothing in a.py or b.py invokes g(). So life is good.
But what happens if we attempt to import b.py (without having previously imported a.py, that is):
>>> import b
Traceback (most recent call last):
     File "", line 1, in <module>
     File "b.py", line 1, in <module>
    import a
     File "a.py", line 6, in <module>
 print f()
     File "a.py", line 4, in f
 return b.x
AttributeError: 'module' object has no attribute 'x'
Uh-oh. That’s not good! The problem here is that, in the process of importing b.py, it attempts to import a.py, which in turn calls f(), which attempts to access b.x. But b.x has not yet been defined. Hence the AttributeError exception.
At least one solution to this is quite trivial. Simply modify b.py to import a.py within g():
x = 1

def g():
    import a # This will be evaluated only when g() is called
    print a.f()
Now when we import it, everything is fine:
>>> import b
>>> b.g()
1 # Printed a first time since module 'a' calls 'print f()' at the end
1 # Printed a second time, this one is our call to 'g'

#8: Name clashing with Python Standard Library modules

One of the beauties of Python is the wealth of library modules that it comes with “out of the box”. But as a result, if you’re not consciously avoiding it, it’s not that difficult to run into a name clash between the name of one of your modules and a module with the same name in the standard library that ships with Python (for example, you might have a module named email.py in your code, which would be in conflict with the standard library module of the same name).
This can lead to gnarly problems, such as importing another library which in turn tries to import the Python Standard Library version of a module; but, since you have a module with the same name, the other package mistakenly imports your version instead of the one within the Python Standard Library. This is where bad stuff happens.
Care should therefore be exercised to avoid using the same names as those in the Python Standard Library modules. It’s way easier for you to change the name of a module within your package than it is to file a Python Enhancement Proposal (PEP) to request a name change upstream and to try and get that approved.
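To make the failure mode concrete, here is a hypothetical layout that is enough to trigger it:
# Hypothetical project layout:
#
#   myproject/
#       email.py     <-- your module, shadowing the standard library 'email'
#       main.py
#
# main.py:
import email                             # silently picks up myproject/email.py
msg = email.message_from_string('...')   # AttributeError: no attribute 'message_from_string'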

#9: Failing to address differences between Python 2 and Python 3

Consider the following file foo.py:
import sys

def bar(i):
    if i == 1:
        raise KeyError(1)
    if i == 2:
        raise ValueError(2)

def bad():
    e = None
    try:
        bar(int(sys.argv[1]))
    except KeyError as e:
        print('key error')
    except ValueError as e:
        print('value error')
    print(e)

bad()
On Python 2, this runs fine:
$ python foo.py 1
key error
1
$ python foo.py 2
value error
2
But now let’s give it a whirl on Python 3:
$ python3 foo.py 1
key error
Traceback (most recent call last):
  File "foo.py", line 19, in 
    bad()
  File "foo.py", line 17, in bad
    print(e)
UnboundLocalError: local variable 'e' referenced before assignment
What has just happened here? The “problem” is that, in Python 3, the exception object is not accessible beyond the scope of the except block. (The reason for this is that, otherwise, it would keep a reference cycle with the stack frame in memory until the garbage collector runs and purges the references from memory. More technical detail about this is available here).
One way to avoid this issue is to maintain a reference to the exception object outside the scope of the except block so that it remains accessible. Here’s a version of the previous example that uses this technique, thereby yielding code that is both Python 2 and Python 3 friendly:
import sys

def bar(i):
    if i == 1:
        raise KeyError(1)
    if i == 2:
        raise ValueError(2)

def good():
    exception = None
    try:
        bar(int(sys.argv[1]))
    except KeyError as e:
        exception = e
        print('key error')
    except ValueError as e:
        exception = e
        print('value error')
    print(exception)

good()
Running this on Py3k:
$ python3 foo.py 1
key error
1
$ python3 foo.py 2
value error
2
Yippee!

#10: Misusing the __del__ method

Let’s say you had this in a file called mod.py:
import foo

class Bar(object):
    ...
    def __del__(self):
        foo.cleanup(self.myhandle)
And you then tried to do this from another_mod.py:
import mod
mybar = mod.Bar()
You’d get an ugly AttributeError exception when your program exits.
Why? Because, as reported here, when the interpreter shuts down, the module’s global variables are all set to None. As a result, in the above example, at the point that __del__ is invoked, the name foo has already been set to None.
A solution would be to use atexit.register() instead. That way, when your program is finished executing (when exiting normally, that is), your registered handlers are kicked off before the interpreter is shut down.
With that understanding, a fix for the above mod.py code might then look something like this:
import foo
import atexit

def cleanup(handle):
    foo.cleanup(handle)


class Bar(object):
    def __init__(self):
        ...
        atexit.register(cleanup, self.myhandle)
This implementation provides a clean and reliable way of calling any needed cleanup functionality upon normal program termination. Obviously, it’s up to foo.cleanup to decide what to do with the object bound to the name self.myhandle, but you get the idea. But while we’re at it...

#11: __del__ Can't be Trusted

In Python 2 (and in Python 3 prior to 3.4, see PEP 442), the mere existence of this method makes objects that are part of a reference cycle uncollectable by Python's garbage collector, which can lead to memory leaks.
Use a weakref.ref object with a callback to run code when an object is being removed instead.
See also Python gc module documentation
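As a rough sketch of that weakref approach, reusing the hypothetical foo.cleanup and handle from the previous item:
import weakref
import foo   # same hypothetical module as above, providing cleanup()

class Bar(object):
    _refs = {}   # module-lifetime registry; keeps the weakrefs (and handles) alive

    def __init__(self, handle):
        self.myhandle = handle
        ref = weakref.ref(self, Bar._cleanup_ref)
        Bar._refs[ref] = handle   # the callback can't touch self, so stash the handle here

    @staticmethod
    def _cleanup_ref(ref):
        # Runs when the Bar instance is about to be reclaimed.
        foo.cleanup(Bar._refs.pop(ref))
In Python 3.4 and later, weakref.finalize wraps up essentially this pattern for you.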

#12: Using os.system or os.popen instead of subprocess

Starting with the non-controversial: Anything that has been marked deprecated should be avoided. The deprecation warning should have instructions with safe alternatives you can use.
Some of the most frequent offenders are parts of the language that make it difficult to safely call other programs:
os.system()

os.popen()

import commands
We have the excellent subprocess module for these now, use it.
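For instance, a sketch of the safer equivalents -- passing the command as a list of arguments avoids the shell entirely, which sidesteps quoting and injection problems:
import subprocess

path = '/tmp'   # hypothetical argument

# Instead of os.system('ls -l ' + path):
subprocess.check_call(['ls', '-l', path])

# Instead of output = os.popen('ls -l ' + path).read():
output = subprocess.check_output(['ls', '-l', path])
print(output)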

#13: Not using duck typing

Explicitly checking the type of a parameter passed to a function breaks the expected duck-typing convention of Python. Common type checking includes:
isinstance(x, X)

type(x) == X
With type() being the worse of the two.
If you must have different behaviour for different types of objects passed, try treating the object as the first type you expect and catching the failure if it wasn't that type, then try the second. This allows users to create objects that are close enough to the types you expect and still use your code.
See also isinstance() considered harmful.
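A sketch of what that looks like in practice (the function here is made up for illustration):
def load_config(source):
    # EAFP: assume a file-like object first, fall back to treating it as a path.
    try:
        return source.read()
    except AttributeError:
        with open(source) as f:
            return f.read()
Anything with a read() method -- a real file, a StringIO, a socket's makefile() -- now works, and the function never has to check a single type.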

#14: Using pickle to serialize data

import pickle # or cPickle
Objects serialized with pickle are tied to their implementations in the code at that time. Restoring an object after an underlying class has changed will lead to undefined behaviour. Unpickling data from an untrusted source can lead to remote exploits. The pickled data itself is opaque binary that can't be easily edited or reviewed.
This leaves only one place where pickle makes sense -- short-lived data being passed between processes, just like what the multiprocessing module does.
Anywhere else, use a different format. Use a database, or use JSON with a well-defined structure. Both are restricted to simple data types and are easily verified or updated outside of your Python script. See also Alex Gaynor's presentation on pickle.
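A sketch of the JSON route, with arbitrary field names:
import json

record = {'name': 'widget', 'count': 3, 'tags': ['a', 'b']}

# Writing: plain text you can inspect, diff, and edit.
with open('record.json', 'w') as f:
    json.dump(record, f, indent=2)

# Reading it back later, even after your classes have changed:
with open('record.json') as f:
    restored = json.load(f)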

#15: Misusing demonstration modules

Many people are drawn to these modules because they are part of Python's standard library. Some people even try to do serious work with them.
import asyncore

import asynchat

import SimpleHTTPServer
The first two resemble a reasonable asynchronous library, until you find out there are no timers. At all. Use Twisted or Tornado instead.
SimpleHTTPServer makes for a neat demo by giving you a web server in your pocket with the one command python -m SimpleHTTPServer. But this code was never intended for production use, and certainly not designed to be run as a public web server. There are plenty of real, hardened web servers out there that will run your Python code as a WSGI script. Choose one of them instead.
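If you just need something servable, the WSGI contract is tiny. A hello-world sketch that any WSGI server (gunicorn, uWSGI, mod_wsgi, and so on) can run:
# wsgi_app.py (hypothetical module name)
def application(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello from behind a real web server\n']
With gunicorn, for example, that would be served with gunicorn wsgi_app:application.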

#16: Using import array

import array

All the flexibility and ease of use of C arrays, now in Python!
If you really, really need this, you will know. Interfacing with C code in an extension module is one valid reason.
If you're looking for speed, try just using regular Python lists with PyPy. Another good choice is NumPy, for its much more capable array types.
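For instance (a toy sketch; NumPy is a third-party dependency):
import numpy as np

values = [1.5, 2.5, 3.5]   # a plain list is fine for most code
arr = np.array(values)     # numpy arrays add fast, vectorized operations
print(arr * 2)             # [ 3.  5.  7.]
print(arr.mean())          # 2.5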


#17: Split Personality

reload(x)
It looks like the code you just changed is there, except the old versions of everything are still there too. Objects created before the reload will still use the code as it was when they were created, leading to situations with interesting effects that are almost impossible to reproduce.
Just re-run your program. If you're debugging at the interactive prompt, consider debugging with a small script and python -i instead.

#18: Copy is Almost Reasonable

import copy
The copy module is harmless enough when used on objects that you create and you fully understand. The problem is once you get in the habit of using it, you might be tempted to use it on objects passed to you by code you don't control.
Copying arbitrary objects is troublesome because you will often copy too little or too much. If this object has a reference to an external resource it's unclear what copying that even means. It can also easily lead to subtle bugs introduced into your code by a change outside your code.
If you need a copy of a list or a dict, use list() or dict(), because you can be sure what you will get after they are called. copy(), however, might return anything, and that should scare you.
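What that looks like (a trivial sketch):
original = {'a': [1, 2], 'b': [3]}
mine = dict(original)    # definitely a new dict...
mine['c'] = [4]          # ...so this doesn't touch original

items = [1, 2, 3]
snapshot = list(items)   # definitely a new list
snapshot.append(4)       # items is still [1, 2, 3]
These are shallow copies, of course: the new containers are independent, but they still refer to the same values.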

#19: Admit You Always Hated It

if __name__ == '__main__':
This little wart has long been a staple of many Python introductions. It lets you treat a Python script as a module, or a module as a Python script. Clever, sure, but it's better to keep your scripts and modules separate in the first place.
If you treat a module like a script and something then imports that module, you're in trouble: now you have two copies of everything in that module.
I have used this trick to make running tests easier, but setuptools already provides a better hook for running tests. For scripts, setuptools has an answer too: just give it a name and a function to call, and you're done.
My last criticism is that a single line of Python should never be 10 alphanumeric characters and 13 punctuation characters. All those underscores are there as a warning that some special non-obvious language-related thing is going on, and it's not even necessary.
See also setuptools/distribute automatic script creation
and also PEP 366 pointed out by agentultra on HN

#20: Don't Emulate stdlib

It's in the standard library, so it must be well written, right?
May I present the implementation of namedtuple, a really handy little factory function that, used properly, can significantly improve your code's readability.
def namedtuple(typename, field_names, verbose=False, rename=False):
    # Parse and validate the field names.  Validation serves two purposes,
    # generating informative error messages and preventing template injection attacks.
Wait, what? "preventing template injection attacks"?
This is followed by 27 lines of code that validates field_names. And then:
template = '''class %(typename)s(tuple):
   '%(typename)s(%(argtxt)s)' \n
   __slots__ = () \n
   _fields = %(field_names)r \n
   def __new__(_cls, %(argtxt)s):
       'Create new instance of %(typename)s(%(argtxt)s)'
       return _tuple.__new__(_cls, (%(argtxt)s)) \n
   @classmethod
   def _make(cls, iterable, new=tuple.__new__, len=len):
       'Make a new %(typename)s object from a sequence or iterable'
       result = new(cls, iterable)
       if len(result) != %(numfields)d:
           raise TypeError('Expected %(numfields)d arguments, got %%d' %% len(result))
       return result \n
   def __repr__(self):
       'Return a nicely formatted representation string'
       return '%(typename)s(%(reprtxt)s)' %% self \n
   def _asdict(self):
       'Return a new OrderedDict which maps field names to their values'
       return OrderedDict(zip(self._fields, self)) \n
   __dict__ = property(_asdict) \n
   def _replace(_self, **kwds):
       'Return a new %(typename)s object replacing specified fields with new values'
       result = _self._make(map(kwds.pop, %(field_names)r, _self))
       if kwds:
           raise ValueError('Got unexpected field names: %%r' %% kwds.keys())
       return result \n
   def __getnewargs__(self):
       'Return self as a plain tuple.  Used by copy and pickle.'
       return tuple(self) \n\n''' % locals()
Yes, that's a class definition in a big Python string, filled with variables from locals(). The result is then exec-ed in the right namespace, and some further magic is applied to "fix" copy() and pickle().
I believe this code was meant as some sort of warning to people who would contribute code to Python -- something like "We make it look like we know what we're doing, but we're really just nuts." (love ya, Raymond)
See also collections.py source code
and also an attempted fix pointed out by ffrinch on reddit

#21: Trying Too Hard

hasattr(obj, 'foo')
In Python 2, hasattr() is defined to swallow all exceptions, even ones you might be interested in (such as a KeyboardInterrupt), and turn them into a False return value; Python 3 narrows this to catching only AttributeError. This interface can't be fixed after the fact, so use getattr() with a sentinel value instead.
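The sentinel pattern looks something like this (obj here is a stand-in for whatever you are inspecting):
class Thing(object):
    pass

obj = Thing()          # hypothetical object under inspection
_missing = object()    # unique sentinel that no real attribute can be

value = getattr(obj, 'foo', _missing)
if value is _missing:
    print('no foo here')   # only a genuinely missing attribute lands us here
else:
    print(value)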

#22: Off by One

'hello'.find('H')
str.find() and str.rfind() return -1 on failure. This can lead to some really hard-to-find bugs when the result is used as an index, since sequences like strings treat -1 as the last element. Use str.index() and str.rindex() instead.
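A sketch of how quietly this goes wrong:
>>> s = 'hello'
>>> s[s.find('H')]    # find() returns -1, which silently indexes the last character
'o'
>>> s[s.index('H')]   # index() fails loudly instead
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: substring not found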