Google is Omnipotent

The halo effect has graduated from inflating stock prices to making companies godlike. Thus, they can do anything – mere mortals can just speculate. The truth, however, is frequently mundane.

Taylor Buley, writing on the Velocity blog at Forbes, has an article with the provocative title “Google Isn’t Just Reading Your Links, It’s Now Running Your Code.” Mr. Buley goes on to explain that “for years it’s been unclear whether or not the Googlebot actually understood what it was looking at or whether it was merely doing ‘dumb’ searches for well-understood data structured like hyperlinks.” In other words, Google has built a Javascript interpreter!


The source for this headline comes directly from Google:

On Friday, a Google spokesperson confirmed to Forbes that Google does indeed go beyond mere "parsing" of JavaScript. "Google can parse and understand some JavaScript," said the spokesperson.

So it’s confirmed, then.

Mr. Buley spends most of his article explaining that building a Javascript parser is really fucking hard. In fact, a quote from one of his experts isolates the key problem – how long the code will run – by noting that “The halting problem is undecidable. There is no algorithm that can solve it.” Well, OK, I suppose, but couldn’t you process a lot and cut it off at an arbitrary point? Sure you’d miss some stuff, but surely you’d get enough?

Actually, that’s what another expert says:

"It’s hard to analyze a program using another program," the person says. "Executing [JavaScript code] is pretty much that’s the only way they can do it."

Mr. Buley believes this is a great accomplishment, and a largely unknown one.

He’s right on one count.

In a previous post, I cited a paper “Data Management Projects at Google” and talked about Edward Chang. Well, the paper is actually about three projects, and one of those is “Indexing the Deep Web,” spearheaded by Jayant Madhavan. In that 2008 paper, Dr. Madhavan had this to say about Javascript:

While our surfacing approach has generated considerable traffic, there remains a large number of forms that continue to present a significant challenge to automatic analysis. For example, many forms invoke Javascript events in onselect and onsubmit tags that enable the execution of arbitrary Javascript code, a stumbling block to automatic analysis. Further, many forms involve inter-related inputs and accessing the sites involve correctly (and automatically) identifying their underlying dependencies. Addressing these and other such challenges efficiently on the scale of millions is part of our continuing effort to make the contents of the Deep Web more accessible to search engine users.

It would seem they solved this problem! (This is a big accomplishment). When did they solve it? Recently?

Well, sort of. In a 2009 paper called “Harnessing the Deep Web: Past, Present, and Future,” they say this:

We note that the canonical example of correlated inputs, namely, a pair of inputs that specify the make and model of cars (where the make restricts the possible models) is typically handled in a form by Javascript. Hence, by adding a Javascript emulator to the analysis of forms, one can identify such correlations easily.

So let’s back up.

What is Google doing? They’re accessing structured data hidden behind form submissions. Now, we say the information is “hidden” behind form submissions because you have to submit the form to get the data. One approach – the “dumb” approach – is to generate all possible result URLs and then crawl all of them.
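To make the combinatorics concrete, here is a minimal sketch of that dumb approach, assuming a hypothetical car-search form; the field names, values, and URL below are invented purely for illustration:

```python
from itertools import product
from urllib.parse import urlencode

# Hypothetical select inputs scraped from a search form (values are illustrative).
form_inputs = {
    "make":  ["Ford", "Honda", "Toyota"],
    "model": ["F-150", "Civic", "Corolla"],
    "year":  ["2008", "2009", "2010"],
}

def dumb_urls(action_url, inputs):
    """Yield one result URL per combination of input values (the full Cartesian product)."""
    names = list(inputs)
    for values in product(*(inputs[n] for n in names)):
        yield action_url + "?" + urlencode(dict(zip(names, values)))

# 3 * 3 * 3 = 27 URLs here, but with 5 inputs of ~46 values each the same
# product exceeds 200 million URLs, which is exactly the problem quoted below.
print(sum(1 for _ in dumb_urls("http://example.com/search", form_inputs)))
```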

But. Those clever folks at Google noticed this might be a problem:

For example, the search form on cars.com has 5 inputs and a Cartesian product will yield over 200 million URLs, even though cars.com has only 650,000 cars on sale.

The challenge, then, is generating fewer URLs. Thus, they developed an algorithm with this property:

We have found that the number of URLs our algorithms generate is proportional to the size of the underlying database, rather than the number of possible queries.

How do they do this? Well, one big challenge is (as noted above) that the valid inputs for one field can depend on the value of another field. Google has taken to constructing databases of “interrelated data” (like manufacturer and car model) so they can automatically detect the data the form wants and limit their indexing accordingly.

But to detect when some fields on a form are interrelated, you… need to have more than the HTML. In fact, almost all input-dependent forms rely on Javascript to change the values around after a selection.

Well, the clever researchers at Google knew they needed to determine which fields in a form were interrelated. They also figured they only needed to determine this once: once they knew which fields were related, they could generate the URLs automatically using their existing algorithms.

As you can imagine, if you only need to do it once (for each form), then it becomes practical to emulate. You emulate one form, and get 650,000 URLs to index with solid data. It’s cheap – so cheap, it’s almost worth getting a human to do it. (Except no Googler would think of that!).
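Here is a rough sketch of that payoff, assuming (hypothetically) that a one-time Javascript emulation of the form has already produced a make-to-model mapping; the mapping, field names, and URL are all invented for illustration:

```python
from itertools import product
from urllib.parse import urlencode

# Pretend a one-off Javascript emulation of the form revealed which models
# each make actually exposes (these values are purely illustrative).
models_by_make = {
    "Ford":   ["F-150", "Focus"],
    "Honda":  ["Civic", "Accord", "Fit"],
    "Toyota": ["Corolla", "Camry"],
}
years = ["2008", "2009", "2010"]

def informed_urls(action_url):
    """Only emit (make, model) pairs the form's Javascript would actually allow."""
    for make, models in models_by_make.items():
        for model, year in product(models, years):
            yield action_url + "?" + urlencode(
                {"make": make, "model": model, "year": year})

naive = len(models_by_make) * sum(map(len, models_by_make.values())) * len(years)
informed = sum(1 for _ in informed_urls("http://example.com/search"))
print(naive, informed)  # 63 vs. 21: the informed count tracks the real inventory
```

The informed count grows with the underlying inventory rather than with the number of possible queries, which is exactly the property Google's researchers describe above.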

But – and here’s the thing – to emulate the behavior of a form driven by Javascript you have to have the Javascript files. You need to download them, and then execute them.

In other words, the second expert Mr. Buley consulted is spot-on. Google is executing the Javascript code to find out something very specific (which fields on a form are interrelated, and presumably anything done in an onsubmit event that would alter the indexing URL).

This is not news. It’s publicly available information – very easily found through Google Scholar, and even easier to find if you’re following Google’s main researchers – and there is no reason to resort to speculation to answer the question. They’ve been accessing the Deep Web – the web hidden behind forms – for years; Javascript is an obvious stumbling block; Google researchers have published papers on it (frequently presented at conferences!).

It is galling to see a reporter say that something is “unclear” when it would be hard to make it any clearer. In 2008, Jayant Madhavan wrote a post on the Google Webmaster Central blog about crawling through forms to get to the Deep Web – so this stuff isn’t even restricted to academic papers easily accessible through Google Scholar and surfaced in regular Google results. No, it’s in the blogosphere.

I think I’ve gone a bit too far, so I’ll stop now.

General McChrystal: Exposed


David Brooks has a column today on the debacle created by the Rolling Stone report on General Stanley McChrystal. Ignoring the politics (why did the Rolling Stone reporter adopt that specific narrative, which he knew would result in political controversy?), there are a couple of points Mr. Brooks raises which I think are worth addressing.

The Psychology of Groups

First, he points out that it’s natural for people in small groups to complain about people in other groups as a way to relieve stress and build a sense of community. He is quite right about this; contemporary research in social psychology has demonstrated, time and again, that people (i) believe that their group is better, and (ii) have less awareness of people in other groups as people.

Possibly the best book on the subject, The Psychology of Stereotyping by David Schneider, goes into this in depth (I highly recommend the book). To summarize,

the “Tajfel effect” occurs when people show ingroup bias for no reason. The most obvious example in the real world is the attachment people develop to sports teams from the cities they live in. Why do people associate so strongly with a certain team just because they live nearby? The most striking psychological example is the “minimal group” situation. If you take a set of people and divide them into two groups in a completely arbitrary fashion – say, by flipping a coin – then even if you tell them ahead of time how the groups were divided, people show ingroup bias. That is, if you survey people, they believe that their group is on average smarter, more attractive, etc. (This effect is robust, i.e. it holds true if you measure it in a different way – say, with something other than surveys.)

The second effect is that people in the group will try to maximize ingroup differentiation (and minimize outgroup differentiation – e.g. “Those Muslims are all the same”). It’s a way, in essence, of “dehumanizing” people who are different from you. Why? Well, the easy answer is that everyone has limited cognitive power. You take shortcuts and, in general, while it’s very important to know how people in your own groups are different from each other, it’s essentially irrelevant to know the same information for people you don’t identify with. All you really need to know are attributes (or stereotypes) like “snakes tend to be dangerous” or “maggots are bad.” Applied to people, it can be “jocks tend to be dumb” or “politicians tend to lie.” Limited interactions mean you don’t need to know any more… you just need to know enough to deal with them if you encounter them.

Thus, it’s not surprising that General McChrystal’s group exhibited (as Rolling Stone reported) “arrogance” about their own capabilities and denigrated people whom (i) they didn’t deal with much, and (ii) they had nothing in common with. If you have something in common with someone, then you are by definition part of some shared group – and even if it’s a weak tie, the same ingroup/outgroup bias comes into play (just weaker, obviously).

In fact, it’s important to note the fact that General McChrystal’s team exhibited such behavior: it demonstrates that he had a unified group. The Rolling Stone report really shows a healthy team. Why? Because the General was taking people from ostensibly different groups (computer geeks from MIT, special ops, soldiers, etc.) and fusing them into a unified team. It would have been very easy for the special ops guys to whinge about the computer geeks, or the soldiers, and so on. They didn’t: they complained about people outside the group.

That is, in fact, an indication of General McChrystal’s “greatness,” and is something – as Rolling Stone noted – which has given him such a reputation: the ability to pull people from multiple different backgrounds and construct a highly functional team.

The Cult of Personality

The second point David Brooks makes is encapsulated by this paragraph:

Then, after Vietnam, an ethos of exposure swept the culture. The assumption among many journalists was that the establishment may seem upstanding, but there is a secret corruption deep down. It became the task of journalism to expose the underbelly of public life, to hunt for impurity, assuming that the dark hidden lives of public officials were more important than the official performances.

I can’t really disagree with him: reading about politics or watching the news is no longer about policies; it is about the personal lives of the politicians. We ask “Would we like this guy if we met him in a bar?” or “Do we want to imitate this guy?” when, really, such concerns are patently irrelevant to the quality of the job they do.

Popular success is not determined by results; it is determined by personality.

Politicians have become celebrities. They establish a personal brand, and suck people into believing that they are a certain “type of person.”

It’s an interesting extension of the representative democracy model. We elect politicians who make political judgments for us – that is, they represent our collective interests, ideally. With the cult of personality, we elect people who seem like “our kind of people” and trust them to make the kind of decisions we’d like them to make. Thus, instead of judging a politician on how well they represented our interests, we judge them on whether or not we still feel like we have something in common with them.

Politicians as a proxy for people; personality as a proxy for ideology.

Of course, the sad thing is that personality is a terrible proxy for either ideology or effectiveness.

It’s also a classic lesson in Peter Drucker’s wisdom that “you can’t manage what you don’t measure” and “what’s measured improves.”

And as politicians learn how important their personality “brand” is, they become obsessed with maintaining that brand. But they have found – particularly with the rise of TV – that managing their personality brand has very little to do with passing legislation. On the other hand, it has a great deal to do with (i) sound bites, (ii) relationships with other politicians, and (iii) endorsements by famous people (and other politicians).

As such, the “personality brand” of politicians improves. But everything extraneous to that – e.g. actually reading the laws they’re voting on – deteriorates, because it has no impact on what’s being measured.

In fact, personality has become so important that people think the results (which personality is acting as a proxy for) are irrelevant.

Such is the fate of General McChrystal. He is – according to many on both sides of the political divide – highly capable, has proven success in multiple arenas, and is showing success in Afghanistan. Yet his personality has been judged lacking, based on the highly filtered view of it (and of his team) provided by the Rolling Stone reporter.

Mr. Brooks is correct when he calls the “exposure ethos” damaging. But he misdiagnoses the problem – it’s not exposure, per se, it’s exposure of the wrong things. Actions are what we should care about – such as Nixon’s ethical abuses – not the “kvetching,” as Mr. Brooks calls it, of healthy team dynamics (which should have stayed within the team).

Advertising Statistics Suck

This post is a continuation of my previous post How to Value Advertising.

Specifically, it’s a reply to Andrew Eifler, who wrote the blog post I responded to. He raised this point:

On the subject of variables and, as you point out, there can be quite a few – i think one of the biggest issues is how we quantify presence on each media channel. Universally the units that are used are “GRPs” or Gross Rating Points which are the product of “Reach” and “Frequency” against your target audience. For advertising measurement to really progress we really need a new unit of measurement. The system of GRPs worked great when the only media options were TV, Print, and Radio – but in today’s world, with such a fragmented media landscape, there really needs to be a more fitting measure. Maybe something like “Persuasion units?” Interested to hear what you think about this.

Andrew Eifler

In general, I doubt I could come up with a decent replacement statistic, simply because the data is so poor. I agree, however, that the current statistics you use – GRPs, and also TRPs – are woefully bad.

Why GRPs Are Bad

The statistic Gross Rating Points (GRPs) is calculated by multiplying percentage reach by frequency (is that average frequency?). Now, this is all well and good if all you’re interested in is "impressions" as in "banner ad impressions." But the experience of NON-obnoxious (overlay) and NON-personalized banner ads should lead people to be VERY skeptical of the worth of impressions as something useful.
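For what it’s worth, the arithmetic behind a GRP figure is trivial; here is a toy calculation with made-up numbers:

```python
# GRPs are just reach (as a percentage of the target audience) multiplied by
# average frequency; the numbers below are assumed purely for illustration.
reach_pct = 62.5       # 62.5% of the target audience saw the ad at least once
avg_frequency = 4.0    # on average, each of them saw it 4 times
grp = reach_pct * avg_frequency
print(grp)  # 250.0 gross rating points: dressed-up impressions, nothing more
```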

0.23% Average CTR

Click-through rates on banner ads are what, 0.2-0.3%? If “actions” on TV ads (clicking through being the banner-ad equivalent of an action) occur at similar rates – I wouldn’t be surprised – that’s very low. And what’s the conversion rate after that? 5%? 10%? Depressing, but I suppose it’s beside the point.
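A quick back-of-the-envelope funnel, using the (assumed) rates above, shows why that is depressing:

```python
# Hypothetical funnel using the rates quoted above; every number is assumed.
impressions = 1_000_000
ctr = 0.0025           # roughly 0.25% click through
conversion = 0.05      # 5% of those who click actually convert
sales = impressions * ctr * conversion
print(int(sales))      # 125 sales out of a million impressions
```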

A more important observation is that (i) people will have variable marginal returns to seeing an ad repeatedly, and (ii) the distribution of view frequencies is highly unlikely to be normal across the population or segment. In the case of (i), I think the marginal returns are likely to resemble an S-curve; of course, if your ad is particularly irritating, the returns may turn negative after some additional inflection point.

I hope the current methodology takes (ii) into account; I would expect some members of the population to be much more likely to see an ad repeatedly. I suppose you can mitigate this by segmenting the population in the right way (e.g. segment by the number of times they have seen/are likely to see and/or respond to the advertising). Otherwise, you’re asking to be misled.
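A small simulation makes the point about (i) and (ii) together. The S-curve and the audience split below are pure assumptions, chosen only to show that two audiences with the same average frequency (and therefore the same GRPs) can respond very differently:

```python
import math

def response(freq, k=1.5, midpoint=4.0):
    """Assumed S-curve: response probability rises slowly, then saturates."""
    return 1 / (1 + math.exp(-k * (freq - midpoint)))

# Two hypothetical audiences with the SAME average frequency (~4 exposures):
# one homogeneous, one where a small heavy-viewing group soaks up most of them.
uniform = [4.0] * 1000
skewed = [12.0] * 250 + [1.33] * 750   # heavy vs. light viewers, mean ~ 4

for name, audience in [("uniform", uniform), ("skewed", skewed)]:
    expected = sum(response(f) for f in audience) / len(audience)
    print(name, round(expected, 3))

# With these (made-up) parameters the skewed audience responds far less often
# at the same reach-times-frequency figure, which is exactly what GRPs hide.
```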

The Problem of Advertising Statistics

However, there’s a deeper problem with such “industry standard” statistics: they do not measure the result, they measure the deliverable. Or: they do not measure what the customer cares about (increased sales); they measure what the advertising did. “We showed most of the people you care about this ad 3 times – and surveys (?) indicate that consumers remember the ad!”

Sure, I get that’s the best you can do. You’re not selling increased sales; you’re selling something very specific. If the company gets increased sales, good for them; if not, well, they decided to buy the advertising in the first place. Hell, figuring out how to grow sales is (allegedly) why company executives are paid so much.

But, it would seem to me, part of empowering companies to make their own choices about how much to spend is giving them the information they care about – “How much did the advertising campaign increase my revenue and profit?” – not the deliverables (which are only really of interest internally to the advertising company). Making the process transparent – letting the company buying advertising know what the advertising delivered to their market – may (i) seem like a good thing to do, and (ii) help justify fees to the customer, but it’s really not a very bright idea.

An Unfounded Extrapolation

The reason is simple: it reduces the profit margins of marketing companies. Oh, not in the short term – but in the long term. The selection pressures for marketing shift from making the most effective marketing to making the… well, less effective marketing. If you can make the process less effective, you get paid more. At best, efficiency will stop increasing.


The approach should be reversed. Marketing companies shouldn’t sell different product lines – “Yes, you can spend $3m on TV, $1.5m on radio, $1m on billboards, and $2.5m on online advertising; we can do that for you” – they should be selling increases in sales. Hell, if you really wanted to motivate an advertising company, you’d give them some percentage share of the increased revenue attributed to advertising (though with provisions to prevent gaming).

Sure, yes, I know I’m reaching a bit: I have no empirical evidence for such an allegation, and what we know about how (digital) advertising is changing things implies the opposite. The amount of innovation occurring in digital advertising (albeit, sometimes creepy innovation) is staggering. The allocation algorithms behind Google’s AdWords and AdSense programs are designed to produce an efficient outcome for all involved; the personalization possible with customer tracking (e.g. DoubleClick) is getting there; the shift from Impressions to Actions is coming fast, etc. I just don’t like statistics like the GRP (unless I’ve completely misunderstood it…).

Screw the Literature

Of course, it’s not like the (academic) literature is any better. I dipped into a couple of (mathematical) marketing journals earlier today when I was researching the (earlier) response; I lost most of my references, though, and as this is not an academic paper I’ll refrain from re-locating them. The upshot, though, is that modern mathematical and economic accounts of advertising assume that (i) you can segment your population well, and (ii) all segments are homogeneous (note that (ii) implies (i)). Is this accurate? I would be terribly surprised if the segmenting were that accurate.

Given these constraints, most models assume that (i) advertising contributes to some sense of “goodwill” among people, and (ii) in the absence of advertising, the amount of goodwill a customer feels towards you or your product declines (which is a great way to justify advertising! Apparently it’s a well-established empirical pattern; I’d like to check that).
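For what it’s worth, a toy, discrete-time version of that kind of goodwill model is easy to write down; the decay rate and spending schedule below are invented purely for illustration:

```python
# A toy, discrete-time version of the kind of goodwill model described above:
# advertising adds to goodwill, and goodwill decays on its own. The decay rate
# and spending schedule are assumed purely for illustration.
decay = 0.2                       # fraction of goodwill lost each period
spend = [5, 5, 5, 0, 0, 0, 5, 0]  # advertising effort per period

goodwill = 0.0
history = []
for a in spend:
    goodwill = (1 - decay) * goodwill + a
    history.append(round(goodwill, 2))

print(history)          # goodwill builds while you spend, then leaks away
print(max(history), round(sum(history), 2))  # peak vs. cumulative goodwill differ
```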

Goodwill is a textbook case of “inventing a variable which doesn’t really exist.” The scientific justification is that goodwill hypothetically relates to several variables – known and “latent” (which means “unobservable”) – all of which are correlated, so goodwill serves as an “index” for those other variables. Actually, what they would probably tell you is that goodwill is a latent variable that can be distilled via structural equation modeling from some empirically observable variables – but it’s a distinction without a difference. Mostly.

Now, I suppose goodwill is a better thing to try and measure than GRP. However, it’s still (i) artificial, and (ii) has nothing to do with revenue or profit. Thus, the literature is little better.

However, one good point (which I had not considered) is that, given these assumptions, the approach you take depends on whether you are trying to maximize goodwill at some point in time t, or to maximize the integral of goodwill over the advertising campaign. The obvious example of the former is selling tickets to some event.

Closing Thoughts

Measuring advertising is hard. So the tools you have are limited.

However, I think the statistics cited should have more to do with what the company buying advertising needs. That may be revenue and profit, of course – but it may be something else. I mentioned aspirational advertising in the last post; I have no idea how to measure that (except really crudely, with surveys and interviews).

And I think the statistics used should have less to do with impressions, unless you’re trying to improve effectiveness (“Our GRP is 250, but sales only increased 0.4%! Something needs to change!”). It’s certainly never something you should show the customer.

Statistics related to inputs are useless. GRP, and similar statistics, look pretty much at what you put into the campaign – just like college rankings look at what goes into the colleges (SAT scores, money, etc). They don’t measure outputs, e.g. how successful each college student is, or how much advertising increased sales. Why? Because it’s hard to measure.

But abandoning something just because it’s hard is no way to live; and adopting an inputs-based measurement process will do nothing but increase cost (like it’s done for the college industry).

Unless, of course, that’s what you want.


Also, I have to confess that the reply to this I wrote earlier was lost when my computer crashed… that’ll teach me to use something lacking autosave.