Please remember that my comments are my own and do not necessarily reflect those of my employer.
SEO, or Search Engine Optimization, is an online marketing discipline focused on driving traffic to your site via search engine rankings. There are two main channels, active and passive, which differ primarily in ad spend: active SEO involves paying for ads, and passive does not. From there, there are a number of ways to improve your credibility with search engines: a targeted domain name (e.g., MyBusiness.com), meaningful paths within your domain (e.g., MyBusiness.com/BusinessUnit/WorkingGroup), making sure the words on your site are relevant to your topic, or having other sites about your topic link to yours (for example, a site about puppies being linked to by a site about Beagles), which produces what's called organic traffic. I learned a lot about SEO when I worked at Terralever, where as a developer I implemented the SEO designs our marketing team put together. SEO in and of itself isn't a problem; in fact, it's important for allowing sites to be found by search engines.
But SEO can be abused. Search engines have rules to detect when website operators are trying to game their ranking heuristics, and violating those rules can get your site or business blacklisted. One of the most famous cases was JC Penney in 2010-2011. Penney had hired an SEO firm to optimize their inbound traffic; the firm created thousands of other sites linking to Penney's main site, driving it to the top of Google's rankings. When a search engine's crawlers detect inbound links to a site from other sites about the same topic, those are termed organic links, and they substantially boost the site's credibility. Google famously (at least in marketing circles) punished Penney by drastically reducing its rankings (in one example, from #1 to #68).
I know a lot about SEO from my old days working in that field. And yet, I was still fooled by this one.
My wife and I have been talking about adopting a puppy for about a year and a half. I've never raised a puppy, and our other dog, Samson, recently turned four years old. With us hoping to have a human child soon, Meredith thought that if I ever wanted to raise a puppy (for it to still be my dog), we should do it soon. I've wanted a Beagle for a long while, and after watching local shelters for a Beagle puppy for months and rarely, if ever, seeing one come available, we concluded that we'd need to find a breeder. I'm not generally a huge fan of breeders, but I also don't necessarily feel bad going to one if I think the arrangement will be good. We did some searching, found a site called Washington Beagle Breeders, and found a pup we thought would be a good fit for us. We chatted with one of the reps for a bit, and Dylan sounded like he'd be a good pup for us. And the arrangement with the people from the website seemed pretty cool - a way for breeders to get an internet presence without having to manage a website and all of that on their own. I thought, hey, if I were a breeder, that's a service I'd probably like!
When we were making our final decision, we were missing one crucial piece of information - one piece that the website was explicitly deceptive about.
Dylan wasn't from a breeder in Washington. Dylan was from a breeder outside of St. Louis, Missouri.
By now we were emotionally invested in going forward with Dylan. The breeder's rep informed us that there would be an additional travel fee on top of the base price of the dog, that we wouldn't be able to meet Dylan before the adoption, and that we'd be on the hook for a return fee and a service fee if we decided not to move forward with the adoption. I couldn't imagine we'd want to return the dog, but that put me off. Still, like I said, we were pretty emotionally invested, and we decided that would be okay. I figured the flight would be about as long as the ones we usually take - 3-4 hours - and that wouldn't be too bad. We arranged the travel, planned all of the timing, and so on. Dylan was scheduled to arrive this afternoon.
This morning, checking the flight status, I found out:
- Dylan wasn't on a direct flight from STL to SEA, but instead had a connection in Salt Lake City.
- Although he was scheduled to arrive at SEA at 12:20pm local time, he was dropped off at 10pm the night before in St. Louis for a 6:45am flight out.
If you read up on raising puppies, you'll find that puppies generally need to relieve themselves every 4-5 hours. You might be lucky enough to have a dog who can sleep through the night at 13 weeks, but you still need to let him out. By my calculations, Dylan was in his crate for at least 15 hours. That meant he slept in his own pee and poop for a long while - a puppy full of energy with no way out of his crate.
Keeping the dog confined for that long, I'd understand. I think it'd suck, but I get it. But since the dog is unable to control his bowel and bladder for that long, I think it's just rotten.
Never again will I go through an online breeder. We'll only ever go through a local rescue or a local breeder.
I want to show you the specifics. Check out these sites:
Will I ever do business again with a breeder? Maybe. I don't think breeders themselves are the problem. And I even think a middleman isn't necessarily bad -- like I said, I probably would have been excited to find a broker like this if I were a breeder. If Purebred Breeders (the company really behind all of these sites) were actually based in Washington, I wouldn't have had a problem with them. But the fact that these dogs aren't actually in Washington, or California, or Arizona, or New York - and the fact that the sites work so hard to make it feel like they are (Washington Beagle Breeders' site uses a Washington-based phone number - area code 206) - it just disgusts me.
And I'm disappointed with myself for not having realized that I was being played.
All of that being said, happily, my puppy and our current dog seem to be getting along just splendidly now that they've settled in:
So today, I went to install BD Advisor from CyberLink. I was trying to play Vudu content on my 27" iMac (running Windows 8, of course) and was having inexplicable trouble with HDCP (I know that the still-relatively-modern AMD Radeon in the machine is HDCP-compliant, as should be the DisplayPort-to-DVI adapter I use for my second monitor). After a few forums recommended BD Advisor, I went to get it. Bing presented a couple of options: CyberLink's site and Cnet's Download.com. I went with Cnet because I figured it was less likely to require me to enter my email address.
I remember back in the day, my friend Charles and I used to surf Download.com *all the time*. We used Download.com before it was Cnet's (which is forever ago in internet years), and even since then it was a pretty reliable repository for downloadable goodies and freeware. Usually, that freeware was unsullied.
I went through the download process, but instead of getting the BD Advisor installer, I got the "CBS Downloader" or something like that. But I let it go, not seeing another way to get the app I wanted. And it prompted me to go through what I thought were routine license agreements. But when I clicked Accept a couple of times, and my IE instance disappeared, I got to thinking that something might be a little fishy. So I looked a little closer.
It was prompting to install junkware.
I know it's junkware because the text describing what it's installing is really, really small.
Now, that might not look terribly small to you. Trust me, on a 27" screen at 2560x1440, it's pretty small:
So, recognizing the threat once I had clicked through a couple "Accept" buttons, I fired up Programs and Features, and whatever did I see?
Oh hey, look at that, things I most certainly did not intend to install on my computer! (Wajam and Coupon Companion Plugin)
Even BETTER! When I went to uninstall Coupon Companion, whatever that is, I caught it trying to install MORE spyware from its UNINSTALLER!
"Basic Seek, which fixes DNS errors..." - no doubt by overriding your default DNS configuration and pointing it to something that can track your DNS requests. That's nice.
Happily I was able to quickly get rid of that junkware, and I did get BD Advisor. But look again at the installer:
See how the "Decline" button makes it look as if hitting Decline will cancel the installer? You could argue that "Close" would do that, and therefore "Decline" wouldn't, but with the green "go forward" button labeled "Accept," whose obvious opposite is "Decline," my initial reaction - very understandably - is that Decline means Stop.
For shame, Cnet. I have come to expect more from you over the years. Today, I'm very disappointed.
My Dilbert app for Windows 8, the Modern Dilbert Reader, just got an update that is pending certification for the Windows Store. This change fixes a bug that a few people have periodically seen where strips lie on top of each other. It also speeds up search.
Best, it uses the higher-quality images available for the main strip display.
I'll post a note when the new version goes live, but you can download today's version now, and the Store app will let you know when the update is available.
The "Modern Dilbert Reader" app is not endorsed by my employer.
A week ago, on October 26, Microsoft released the Surface, along with Windows 8. Because this was my first product launch (other than TypeScript, though that technically wasn't my product), well, I just couldn't help myself: I waited in line at the Bellevue Square Microsoft Store and got mine on release day. It was a madhouse:
I got a 32GB device along with a Type Cover - the thicker one with the mechanical keys. I believe the specs say the Type Cover is 5 or 6mm thick; okay, that might be twice the thickness of the Touch Cover, but it's still tiny.
Not only is it awesome for taking to meetings, but I can actually get work done on it. I can't code, of course; well, at least, not at the office, since I can't RDP to my desktop from it. But I've spent the whole week taking it to meetings, taking notes, and writing specs. I can't believe how snappy everything is.
Now, let me set the stage a little bit more. When I started at Microsoft in April, I was assigned a Lenovo ThinkPad X-Series convertible tablet. It supports multi-touch, has a keyboard, gets an extended battery life of about 6 hours, and on day one I was able to install a daily build of Windows 8. I've been using Windows 8 on my desktop PC (no touch) at the office since then as well, and in August, when we RTM'd, I installed it onto my home iMac (also no touch), which I had previously dual-booted between OS X and Win7.
I didn't really understand how cool Windows 8's touch functionality was until I got the Surface.
Now, this isn't to say that partner / OEM vendors don't have worthwhile devices. I haven't played with them at this point. I hope they're as good as the Surface, or even better, because if they're better, then, DANG. But all that having been said - I have to give my kudos to the team behind Windows RT and the Surface. They have done a remarkable job.
If you haven't experienced Windows 8 on a touch device yet, get thee to a Microsoft Store. Try it. The productivity losses of being completely addicted to Jetpack Joyride will definitely be offset by the productivity gains of being able to flick a Word document up to scroll, and then just pointing the cursor into place on the screen rather than using the mouse.
And to be honest, as much as I have been thinking I might want to code on the Surface, in retrospect, I'm not sure that's true. I have a 17" laptop and a 27" desktop screen, and I use that screen real estate judiciously when I'm coding.
OK, who am I kidding? I'll be getting a Surface Pro whenever it comes out. But until then, I'm having a good time on this awesome tablet!
I guess I see the inverted 8, but - what's up with the fish logo for Windows 8 Consumer Preview??
Oh, I see the 8 blended in the background too...
Balsamiq was kind enough to provide me with a free license for Mockups a while back on the condition that I review it. I'm embarrassed to say it has taken WAY longer than I would have expected if I had been the one providing the license. That said, I definitely still owe them the review, so I think I'd better get on it! (I had originally intended to do a videocast of using Mockups, but I'm not particularly good at that sort of thing.)
My daily job responsibilities have never been 100% wireframing or feature design. Before I was introduced to Mockups, I had tried out Axure RP Pro (around version 5, I think), and during my time at Terralever, we used Omnigraffle extensively. I was first introduced to Mockups when we were partnering with MySpace on If I Can Dream, and I was initially impressed by the extremely rapid prototyping we were able to do as a collaborative team. I had spent probably 30 hours on the initial Facebook-based proposal, which was, in fairness, far more extensive than the MySpace integration ended up being (sorry JT). That said, we were able to put Mockups up on the projector, and it was really easy to make some interesting UI components very quickly.
Since then, I've used Mockups at Terralever as well as with my own consulting business. Generally, I've found that Mockups is ideal when I'm looking for:
- Very rapid prototyping, to help conceptualize something that I'm actively coding and just need to give myself a fixed visual.
- Prototypes that I'm going to deliver in PDF format, because of the automatic internal linking it provides (more later).
- Any kind of prototypes that are going to be "internal only." If I need to perform a "real" usability consultation, I prefer Omnigraffle; I think that what I can find in Omnigraffle generally looks a little more polished.
The Prototyping Experience
Absolutely exceptional. To be honest, I think speed is Mockups' single greatest advantage. If the components are there in the library, it's incredibly fast to go from nothing to something good. Let's take a simple example of a common navigation element: the tabs list. Drag-and-drop from the library onto a fresh canvas, and this is what we're presented with:
You can see that there are four items in a comma-delimited list. What could be more intuitive than that? Well, I'll type some comma-separated items in and then hit enter:
Okay, that's pretty good; but I think I should be able to have the Mockups tab selected. Lo and behold, the very simple property editor window saves the day:
You can see in this image that Mockups is selected (and it is highlighted in the "Selected" field of the property editor as well). But what's also very neat about this control (and other similar ones) is the "Links" property set. See how there's a field for each one of the tabs that I created? Every one of them can link to another Mockup in the same folder. When I choose to export them to PDF, those links will be live, and I'll be able to navigate between mockups by clicking on the linking element. In my opinion, it's one of the coolest interactive features, and it's readily surfaced. (I suspect that there may be similar features in Omnigraffle but I've never used them if they do exist).
Also very cool for preserving your "sketch" feel is the ability to "Sketchify" (okay, that term isn't in the UI, but it fits) any bitmapped image you drop into the app. It's actually very good and accurate. Take a look at this (original images courtesy of Amazon and Google Shopping):
This is tremendous for those of us who want to use images to get the point across but are terrible at using Photoshop for anything other than taking someone else's image and using the "Slice" tool.
I've found the locking and ordering mechanism to be very intuitive. Mockups is also very good about letting you select the element you want with the right-click menu, which lets you select down the visual hierarchy even if the element you're trying to get (for instance, a Web Browser surface) is occluded by something else.
The UI library is adequate. There have been times where I've longed for the ability to add a control to it, which invariably leads me to creating a "Library" mockup that contains my custom "controls." The drawback to this model is that you don't get the very nice programmed editing capabilities. For example, suppose I wanted to wireframe the feature that Facebook uses when you've auto-completed a friend (you know, the highlighted background with an X to the right of the text). I can make something that looks like it. However, I don't get the great features of the intrinsic controls: I can't make only the text editable, and I can't make the X hotlink to another wireframe. There may be a way to develop these kinds of extensions, but I haven't found it. The lack of extensibility in this sense is a detractor compared to Omnigraffle (compare Graffletopia), but at the same time, it's a good challenge to be able to do more with less. (I love Graffletopia's diversity in components, but it can also give you a library that is much, much too large).
Still, I think that, while there have been times that I've really wished for some specific controls, it's generally given me what I've needed. And when it didn't have quite what I needed, I was able to fake it.
That said, there is one thing that stands above the others: the Text Paragraph control:
You can see inline the italics, hyperlink, and bold format specifiers that can automatically be applied. I've found this enormously useful and I think the inline links are particularly handy (it's not something you see everywhere). But without a doubt, the best part is that you can get them while you're typing: no need to stop and use the mouse to select a text range just to get a special format like that.
How it stacks up
I realize that as someone who almost exclusively develops on the Microsoft stack, I should probably at least occasionally wireframe in Expression Blend's Sketchflow feature. I'm mildly ashamed to admit that I've never tried it; I think I'm afraid that if I start working in Sketchflow I'll get the urge to do some real programming, or I'll spend too much time on an animation and not get the actual prototyping done. So with that in mind, I'm going to compare it to Axure and Omnigraffle, because those are the other two tools I've used.
Axure: Mockups, even the first time I saw it, was miles ahead of Axure in appearance. Granted, it's probably been two years since I tried Axure, but it sometimes felt like I was literally working with the .NET Windows Forms or Web Forms designer controls (the radio buttons stand out in my mind as looking like pure Windows-based controls). I have seen the end result of some recent Axure work and was a bit impressed (I think they've added some sketch-like appearance support in a recent version). That said, I get the general impression that Axure's prototypes are meant to look a bit more like the real thing than the wireframes you might generate using Mockups or Omnigraffle.
Omnigraffle: Like I noted above, I tend to like Omnigraffle when I'm consulting for a client. Mockups produces visuals that are very sketch-like in their appearance, and while this is generally a Good Thing, sometimes that rounded edge or drop shadow you can include in Omnigraffle makes the product really pop. Still, Omnigraffle is only available on Mac and iPad, and while I have an iMac, I tend to be developing in Windows. The reboot required to get into OS X makes it impractical when I'm going to be the only consumer of the wireframes. Mockups, being an Adobe AIR application, can run on either platform. As a result, when I need to give myself some visual guidance, I tend to do my rapid prototyping in Mockups.
One note: since installing Mockups on my new laptop, I was surprised to see that Comic Sans is no longer the default font. In some ways this saddens me. I realize that most customers were probably unhappy with that default because it made them hesitant to show their mockups to clients, but I thought it was a bit charming in its own way - another not-so-subtle reminder to "not take this visual too seriously." Oh well.
Thanks, Balsamiq, for the Mockups license! Keep the goodness coming!
Probably 99.99% of HTML applications and websites are served over HTTP exclusively. (I'm referring here to HTTP as a transport protocol, not HTTP vs. HTTPS, for example, and I realize that HTTP is an application-layer protocol according to OSI; but developers generally treat it as an abstraction for "the network.") As anybody who has done web programming knows, HTTP is a stateless protocol; that is, it's based on a request-response model, and in general, one request has no knowledge of previous requests. This has posed some challenges for web developers over the years, and some brilliant abstractions of state on top of the statelessness have been devised.
The hard part now, though, isn't to deal with statelessness. It's dealing with the request-and-response model.
Virtually all web communication is inherently request-and-response. Some applications use full-duplex communications to get around that (think of chat software), but for the most part, that isn't really available behind the firewall. WebSockets have yet to be standardized (and there are some questions about long-term compatibility with WebSocket-ignorant proxies), and corporate firewalls typically say no to outbound connections except on ports 80 or 443. Some applications (think Meebo) have been able to work around this limitation by cleverly using long-timeout delays on AJAX requests. The client makes a request to the server, and the server either responds immediately (if an event is in queue) or holds the request for 30-90 seconds to see if an event comes in. I even did this once myself with good success, although I never took that app into production. (There was also some question about the total number of clients an ASP.NET server could sustain while holding threads in that way.)
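The long-polling pattern described above can be sketched in a few lines. This is a simplified, hypothetical client loop (the injected `request` function and the event shape are my own inventions, not any particular API): the server holds each request open and resolves with an event when one arrives, or with null on timeout, at which point the client simply reissues the request.

```typescript
// The event payload shape here is hypothetical, for illustration only.
type EventPayload = { id: number; data: string } | null;

// A minimal long-polling loop. `request` is injected so the loop can be
// exercised (or tested) without a real server; in practice it would be an
// XHR/fetch call with a long server-side timeout.
async function longPoll(
  request: () => Promise<EventPayload>, // resolves with an event, or null on timeout
  onEvent: (e: { id: number; data: string }) => void,
  maxIterations: number // bounded here for demonstration; a real loop runs until stopped
): Promise<void> {
  for (let i = 0; i < maxIterations; i++) {
    // The server holds this request open (e.g. 30-90 seconds), responding
    // immediately if an event is queued, or with null when it times out.
    const event = await request();
    if (event !== null) {
      onEvent(event);
    }
    // On a timeout (null), we just loop and reissue the request.
  }
}
```

The key property is that the connection is always outbound from the client on a standard port, which is why this works through firewalls that WebSockets (at the time) could not reliably traverse.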
In many respects, Windows developers haven't had to deal with this. We could issue synchronous requests, and the UI would stand still for a second, and either it would work or it would fail. But usability concerns over this process, as well as issues with high network latency (imagine pressing the "Submit" button and having to wait 20 seconds while your app freezes - by then, I've force-closed the app) have seen platform providers decree that asynchrony is the only way to go.
HTML isn't the only platform dealing with this limitation. Adobe Flash has long had an asynchronous-communication-only model, and Microsoft Silverlight has carried on that principle; of course, these two have lived predominantly in browsers, where a hanging UI probably means interfering with other apps as well as the one making the request. Interestingly, WinRT - the Windows 8 developer framework - will also mandate an asynchronous model, following in the Silverlight-based footsteps blazed by Windows Phone 7.
So as we trek out into the world of asynchrony, well, we have a whole mess of questions to deal with now:
- If there's an error, does it show up in the calling method or in the callback method? Does it even show up?
- Does a network (transport-level) error surface differently than an application error? What if the server returned an HTTP 403 Forbidden response?
- What are all of the different kinds of errors that can crop up? Do I need to handle SocketException or is that going to be abstracted to something more meaningful to my application?
- What do I do if a network error comes up? Do I assume that I'm offline, panic, and quit? What if my application only makes sense "online"?
- Do I surface an error to the customer? Silently fail? I might generally fail silently if I'm a background process, but then again, what if it's an important one? What if the customer thought he was saving his draft while all along it was offline, and then the customer closes the browser?
- During the async operation, should I show the user a timeout spinner or something to that effect?
- How should I design my async operations? For example, consider a Save operation. Should I capture all of my state at once and send it off, and let the user immediately keep working? Should I make the user wait until saving completes? Should I even use Save, or automatically save whenever something changes?
- If I use auto-save, how do I handle undo? What if I want to undo between sessions? Is there a way to go back if the hosting application crashes? (Worst case scenario: the user accidentally hit Select All, Delete and then the browser crashed after the auto-save).
- Should I write a singleton object? That might be easier and afford strong member protection, but then I can only have one widget, unless I somehow differentiate between them and multiplex, which can get hairy quickly.
- Should the monitoring function accept a callback, or should it be event-based, so that multiple subscribers can listen? (Maybe an event-based model offers some interesting ways to deal with the complexities of a singleton?)
- Should the widget manipulate the view directly, or should I write separate code that handles the view based on the state of the object (or objects)?
The list goes on.
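To make a couple of these questions concrete - in particular, whether a transport-level error should surface differently than an application error like an HTTP 403 - here is one way to structure it, as a sketch. The result type and wrapper function are my own illustration, not from any specific framework; the pattern assumes a fetch-style API, which rejects only on transport failures and resolves normally (with `ok === false`) on HTTP error statuses.

```typescript
// A discriminated union so callers must decide, per error kind, what to do:
// retry, queue for later, surface to the user, or fail silently.
type SaveResult =
  | { kind: "ok" }
  | { kind: "appError"; status: number } // e.g. HTTP 403 Forbidden
  | { kind: "transportError"; error: Error }; // network unreachable, DNS, etc.

// Wraps a request so the two failure modes come back as distinct, typed
// results instead of a single undifferentiated exception.
async function trySave(
  send: () => Promise<{ ok: boolean; status: number }>
): Promise<SaveResult> {
  try {
    const response = await send();
    return response.ok
      ? { kind: "ok" }
      : { kind: "appError", status: response.status };
  } catch (e) {
    // fetch-style APIs reject only on transport-level failures;
    // an HTTP 403 still resolves, just with ok === false.
    return { kind: "transportError", error: e as Error };
  }
}
```

A caller can then branch on `kind`: an `appError` probably means telling the user their save was rejected, while a `transportError` might mean assuming you're offline and queuing the draft locally - exactly the kind of decision the questions above force you to make explicitly.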
We're moving faster and faster into an asynchronous world. It is already happening, and we as developers need to be prepared to handle these difficulties. We also need to understand how to communicate these kinds of questions to our business analysts, our supervisors, and our customers. We need to equip ourselves to ask the right questions of our customers, so that when it's time to make a decision, we have the information we need.
I don't use Firefox very often. Generally I'm in Chrome or IE. But, I keep Firefox on my taskbar, because I do enough web development that it's prudent for browser compatibility testing.
Every now and then I get this dialog:
Note that when I'm not taking a screenshot, that first item has a blue highlight instead of gray.
What's the problem?
Well, first of all, it's a popup. It has its own window, which in my mind means that Firefox thinks I ought to do something about it. The only action that appears to be available is the "Find Updates" button; but of course, that doesn't find updates to the plugin I just installed - which is the purpose of the dialog in the first place, isn't it?
No. In the screenshot, the Java Console 6.0.27 is the newly-installed plugin because I just installed Java (with MonoTouch and MonoDevelop).
Mozilla, this dialog sucks, and has sucked since I saw it back in the early days of Firefox. Take a lesson from IE, and if you want to show me a notification, put it in a bar. Make it say, "The 'Java Console' add-on has been newly installed." Or, "4 new add-ons have been installed; click here to see this list."
At the heart of this problem, though, is a cultural difference; I strongly believe that the *nix culture is reflected in this user experience. It was as if someone said, "OK, well, we'll run firefox --check-plugin-updates | firefox-ui," and the result was piped to a new window because it had nowhere better to go.
That's not to say that command-line piping to the UI is a bad thing (though I believe that it's better to make the programmatic interface between two subsystems, well, programmatic, not text), but if someone had actually spent some time designing the user interaction for the "New plugins installed" use-case, they would not have decided to simply reuse the "Show all installed plugins" UI.