On SEO Scepticism

I think I’m going to form a new society. It’s called the “SEO Sceptics Society”, but it’s not about being sceptical of optimising sites for search engines; it’s about being sceptical of the industry that has grown up around it.

To explain.

Google is by far the world’s most popular search engine, with around 80% market share, which, as an aside, makes you wonder why competition regulators haven’t had a crack at it yet.

The fact that its very name has become a verb meaning “to search” must suggest that it gives searchers the best, most relevant sites in response to their search, ranking results by relevance to the search term. So how does it do this?

Firstly, it encourages sites to think about the keywords or messages which they want each page to convey and to use these messages in the text, the headers and the page titles. This makes it easy for Google to index the pages; it prefers them to be clean, well-written and fast-loading.

Great, so far nothing to be sceptical about. But, Google’s real selling point is the concept of a site’s authority.

Let’s imagine that I wrote the best article ever on the 2008 banking crisis. An article so good, so definitive, that Robert Peston and the Bank of England themselves linked their sites to it before giving up their day jobs. Google would already know that their sites are authorities in their fields (because of the sites that link to them), so for them to link to me would be a massive vote, and my site’s authority, or PageRank, would shoot up. Thus, the next time you Googled about the banking crisis, my site would be at the top of the results page. Still no reason for the scepticism. Assuming that my motives for writing the article were academic or informative, then I would have earned links and, therefore, authority.
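That “massive vote” idea is, at heart, the PageRank algorithm: each page divides its score among the pages it links to, so one link from a high-authority site is worth more than many links from obscure ones. Here is a minimal sketch in Python; the three-site web, the site names and the damping factor of 0.85 are illustrative assumptions, not Google’s actual data or full production algorithm.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Simplified PageRank. `links` maps each page to the pages it links out to."""
    pages = list(links)
    # start with an even split of authority
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # every page keeps a small baseline score...
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        # ...and passes the rest of its score out along its links
        for page, outlinks in links.items():
            for target in outlinks:
                new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

# Hypothetical web: two established sites both link to the new article.
web = {
    "bank_of_england": ["my_article"],
    "peston": ["my_article", "bank_of_england"],
    "my_article": ["peston"],  # the article links back out
}
ranks = pagerank(web)
# my_article collects votes from both authorities, so it ends up
# with the highest score of the three
```

The point the sketch makes is the one in the paragraph above: the article outranks the sites that link to it, not because of what it says, but because of who says it matters.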

But what if I wasn’t trying to earn links, but to bait them? What if I wrote articles on my site or blog just to attract links? What if I sought to procure links to my site simply to bolster my PageRank and, consequently, traffic and possible sales?

Less optimisation, more manipulation.

Although Google frowns on the widespread practice of paying for links, even seemingly innocuous techniques like blogging, commenting on forums and other link-building campaigns have spawned a lucrative industry. By paying specialists to create or bait these links, it is possible, common even, for low-quality sites which offer little real authority or value to rank at the top of Google.

Now, please don’t get me wrong, I’m not suggesting that you should not create content or blogs to inform or communicate and therefore earn links. My issue is with the idea of creating content solely to attract, justify or embed links.

Inadvertently, Google has created a monster. An industry that it cannot control and yet which is totally dependent on it.

Illegal? Not at all. Unethical? Not really. Immoral? Nah. Wrong? Just possibly.

Now see why I’m sceptical?

Making sure my data is readable in a hundred years

Having spent some twenty years researching my family history, I obviously want to make sure that the fruits of my work are accessible to the generations that follow, so how do I ensure that it is all readable in a hundred years?
When I started my research, in the days before PCs, Macs etc, a colleague invested in a Philips Videowriter – basically a huge CRT-based box with built-in thermal printer. It could perform just one task – word processing – using a proprietary format and a 3.25″ (yes, 3.25, not 3.5″) drive. Within a couple of years, it died and could not be repaired. The disks were unreadable and, worst of all, all the hard copies had faded as thermal prints are wont to do. All the data – years of research – was lost. I am determined that this won’t happen to me.
The first thing to do is define what sort of data I am talking about. I think it can be divided into two categories:
– Hard copy media – printed copies of research, certificates, old photos, etc
– Electronic media – scans and source files, photos, databases, research notes, etc
Hard Copy Media
Preserving old documents is a science in itself, so apart from scanning, covered below under electronic media, I won’t attempt to discuss that here.
However, I have produced a book containing biographies, research notes, images, photos and family trees – can I assume that it will survive? The problem is, modern toner-based laser prints on recycled, generic photocopy paper are intended to be quick and cheap, not durable. No one really knows how long the toner will remain stable. Ten to twenty years shouldn’t be a problem, but beyond that?
Electronic Media
This subject has two specific aspects: format and storage.
Twenty years ago, the standard word processor was WordPerfect and images were stored as 8-bit GIFs. Now it’s more likely to be Word and JPEGs and, along with Adobe’s PDF format, it is probably no exaggeration to say that there are billions of JPEGs, PDFs and DOCs in existence, so even when the standards are superseded, it is likely that those files will be readable, even if only by libraries or specialists. Similarly, the family tree databases are also likely to be readable. Although each family tree program uses a different format and structure which can change from version to version, there is an industry standard specifically for family tree databases, managed by the LDS(1). The format, GEDCOM, is ASCII-readable yet maintains names, facts and relationships.
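GEDCOM’s durability is easier to see with a concrete fragment. Every line is plain ASCII text carrying a level number, a tag and a value, so even a few lines of code can recover the names and facts without the original family tree program. The individual below is invented, and the parser is only a sketch of the line structure, not a full GEDCOM implementation.

```python
# A hypothetical GEDCOM fragment: one individual record.
sample = """\
0 @I1@ INDI
1 NAME John /Smith/
1 BIRT
2 DATE 1 JAN 1850
2 PLAC London, England
1 DEAT
2 DATE 4 MAR 1910"""

def parse_gedcom(text):
    """Return (level, tag, value) triples from GEDCOM lines."""
    records = []
    for line in text.splitlines():
        parts = line.split(" ", 2)
        level = int(parts[0])
        # an optional @...@ cross-reference id precedes the tag
        if parts[1].startswith("@"):
            tag, value = parts[2], ""
        else:
            tag = parts[1]
            value = parts[2] if len(parts) > 2 else ""
        records.append((level, tag, value))
    return records

for level, tag, value in parse_gedcom(sample):
    print(level, tag, value)
```

That is the whole point of the format: a hundred years from now, anyone who can read a text file can read the data.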
The media on which it is stored, however, is a different matter.
A few years ago, 3.5″ and 5.25″ disks would have been the norm, but now few people could read either. Since there are probably billions of CDs and DVDs in existence, it is likely that readers for them will exist in the future (even if only in libraries etc), but dye-based CD and DVD ROMs were never intended to last beyond ten years and it is thus unlikely that they will be readable in a hundred years – I have already had some fail after 12 years.
PATA and SATA hard drives are already being replaced, and USB2-based drives will give way to USB3, which will, in turn, go the way of SCSI, PCMCIA and Firewire. I also doubt that flash memory such as CompactFlash, SD etc will survive as a mainstream format for more than 20 years. Online storage, either using cloud-based virtual drives or hosting research on services such as Ancestry, is great…for as long as you pay the subscription or as long as the hosting company exists. Even if Ancestry survives a hundred years or, more likely, some other online repository is created in its place, how will anyone know our data is there?
It is clear that there is no perfect solution. For data formats, sticking with widely used standards makes sense, and I would encourage genealogists to regularly back up their databases in the GEDCOM format. However, the only truly future-proof solution is to keep porting the data to new formats and media as they emerge.
(1) LDS – The Church of Jesus Christ of Latter-day Saints – whose vast resources employed in genealogy make them a key mover in genealogy technology – perhaps less so since the introduction of paid-for services such as Ancestry.

On the iPad

Now I don’t like to be part of a crowd. I don’t follow any trends, run with any packs or, I’d like to think, don’t fall for the hype.

And yet, I couldn’t ignore the iPad completely, could I?

So, the first surprise is that it is a super-sized iPod touch or iPhone, depending on which you choose.

That’s it really.

It runs the same apps, it looks the same but bigger, and it runs the same operating thingy.

And, you know what? That’s why it will be a staggering success. Because 70 million (70 million!) of us already have the baby version, love it, know how to use it, and have wondered why our “proper” computers couldn’t be as good.

Think about it: web browsing, video watching, music, games – yep, that’s about 95% of the use my laptop gets, tethered to the charger. And then there’s the document reader. Oh, how wonderful to have a screen shaped like a sheet of paper on which I can read books, active news sheets, reports and hold it like I would a book!

Ok, like many people, I joined in the live event and was a little disappointed… until I thought about it and realised that Apple have got it just right.

Do I want one? Yes, oh yes.

Dear Steve Jobs

Dear Steve Jobs

I’d like to say thank you for my iPhone. No, it wasn’t a Christmas present, I’ve had it since January 2009 and I LOVE IT. I just haven’t got around to thanking you yet.

However, the thought occurs that my contract with O2 expires in July, so that would be the perfect time for you to bring out a new model, wouldn’t it?

I was wondering though, if you could see your way clear to adding a couple of things.

For instance:

– Customised e-mail and text alerts because, frankly, the current ones are pants

– Syncing over wi-fi; my computer connects wirelessly to the same network as my iPhone, so why do I need a piece of wire to get them to talk to each other?

– A better camera because, frankly, the existing one just isn’t up to much. A better lens, a zoom and a bit more resolution would be nice.

– Better battery life. Yes, I know that the 3GS is better but still not great.

– An SMS outbox for when there is no reception, with automatic re-send when there is a good signal.

Yep, that just about covers it.



On the Future of User Interfaces

What was the name of that movie? You know, the one with Tom Cruise?

“Minority Something”. Report, yes – “Minority Report”. He stood in front of this huge transparent screen and sort of dragged photos and video around like this (waves hand) and then he pulled the corners to make them bigger and then he…

Spielberg apparently based that iconic sequence on conversations with Microsoft, so perhaps it is no surprise that Hollywood’s one-time virtual reality is now nearly real.

Microsoft recently created “Surface”, a table-based computer with a horizontal screen that combines multiple, simultaneous touch inputs, gesture recognition, object and tag recognition and advanced graphics – and, yes, you can drag and resize objects like in the film. Surface is clearly targeted at multi-user interactivity: bars and restaurants for interactive ordering and playing; retail outlets for interactive catalogues; and the corporate world for presentations and briefings. PowerPoint presentations will never be the same again.

These applications of Surface are rich in “ooh, that’s clever” moments, impressing with design and the user experience.

Apple’s iPhone brought gesture recognition to the consumer’s pocket and, almost overnight, the ubiquitous desktop and laptop looked slightly old-fashioned. There are, however, millions of personal computer users in the world and almost every one of them uses a keyboard and mouse.

Enter Windows 7 and Apple’s Snow Leopard, which support the growing number of multi-touch devices on the market. Windows 7 brings pinch-to-zoom and tap-and-drag control to monitors, overlays and laptops while Snow Leopard supports similar gestures using mice and track-pads.

But users are comfortable with their software working in a certain way – simple, point-click control of pull-down menus which have almost become standardised. It will take something very special to change that. So, while multi-touch technology is clearly suited to tablet computers and smartphones, it remains to be seen if it can find a use in homes and, especially, offices.

Once again, the latest iPhone is a trail-blazing example of “augmented reality”. Point it at a street scene and the built-in compass will overlay heading and directions on the camera’s image. Soon it, and devices like it, will overlay information about the buildings around you, recognise faces in the street and allow us to interact with our environment in ways we haven’t even thought of.

Nintendo’s Wii brought a form of virtual reality to the console gaming market with advanced gesture control, proving that the sheer immersive joy of realistic interaction – such as swinging a virtual tennis racket – is at least as important to the mass market as sumptuous, high-resolution graphics, a victory for function (and fun) over style.

So what is the future for user interfaces? I rather think that there isn’t just one future. Office computers will continue to develop around desk-bound mice and keyboards, while hand-held devices and laptops evolve towards gesture-based control, offering innovative ways to interact with their environment.

Multi-touch screens will become commonplace in the home, in the hand and for multi-user devices such as Surface if, and only if, the design is good enough to last beyond those “ooh, that’s clever” moments and doesn’t, once the novelty has worn off, interfere with functionality.

Of course, we all hope that the future of user interfaces is much closer to science fiction. We want projected holographic images (“Help me Obi-Wan, you’re my only hope”) and virtual-reality headsets (whatever happened to them?). We want computers to react to our eye movements or our thoughts, but all this, as well as Tom Cruise’s screen, is, for now, still science fiction.

But probably not for long.