The Amazon case is an interesting one because, despite the appeal of the OP's argument, one can hardly deny the success of Amazon's product listings in spite of their ugly URLs.
However, this raises an important consequence of clean URL design: when you're offering things that may be classified in several categories, it requires good design on the backend/framework to make sure your URL taxonomy isn't overly constricting. For example, example.com/toys/Nintendo-wii or example.com/consoles/Nintendo-wii?
Either one is legitimate, but creating and keeping a consistent taxonomy is difficult enough on its own without simultaneously worrying about what the URL looks like.
> "example.com/toys/Nintendo-wii or example.com/consoles/Nintendo-wii?"
Why not both? The URL is just a URL - it does not need to reflect your underlying data model. There would probably be a canonical URL for use when the category context isn't available (say, "consoles"), but why not have multiple URLs lead to the same information?
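The "why not both" idea can be sketched in a few lines. This is a hypothetical illustration (the names `PRODUCTS`, `resolve`, and `canonical_url`, and the lowercase slug, are all made up), not anyone's actual implementation:

```python
# One product, reachable under several category paths.
PRODUCTS = {"nintendo-wii": {"id": 4221, "name": "Nintendo Wii"}}

# Every category the product belongs to gets a working route;
# one of them is designated canonical for context-free use.
CATEGORIES = {"nintendo-wii": ["toys", "consoles", "electronics"]}
CANONICAL_CATEGORY = {"nintendo-wii": "consoles"}

def resolve(path):
    """Map any /<category>/<slug> URL onto the one underlying product."""
    category, _, slug = path.strip("/").partition("/")
    if slug in PRODUCTS and category in CATEGORIES.get(slug, []):
        return PRODUCTS[slug]
    return None

def canonical_url(slug):
    """The URL to advertise when no category context is available."""
    return "https://example.com/%s/%s" % (CANONICAL_CATEGORY[slug], slug)

# Both category URLs lead to the same information:
assert resolve("/toys/nintendo-wii") == resolve("/consoles/nintendo-wii")
```

The point of the sketch is that the routing layer, not the data model, decides how many doors open onto the same resource.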
> "despite the appeal of the OP's argument, one can hardly deny the success of Amazon's product listings in spite of their ugly URLs"
But they're really not ugly. In fact, given the complexity of the system they represent, they are remarkably human-friendly.
In an ideal world, all ideas, all businesses, and all use cases could be served by simple URLs like "example.com/shockingly-unique-identifier", but we don't live in that world. Amazon has constructed human- and machine-relevant URLs. The author's argument can be applied to many sites, but I don't think Amazon is one of them.
> it does not need to reflect your underlying data model.
Absolutely! In the REST parlance, resources (which are what URIs point at) do not map 1-1 with entities (which is your internal representation of business objects). If they do, you're quite possibly exposing too many internal details and making your application brittle.
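The resource-vs-entity distinction can be shown with a tiny indirection layer. This is a hypothetical sketch (every name in it is invented): the public slug names a resource, and which internal entity it maps to can change over time without the URI changing, which also matches the "temporally varying membership function" quoted below.

```python
# Internal entities: business objects with details we must not leak.
_entities = {9001: {"name": "Nintendo Wii", "db_shard": 3}}

# Resources: what URIs actually point at. The slug -> entity mapping
# can be remapped later without breaking any published URL.
_resources = {"nintendo-wii": 9001}

def get_resource(slug):
    """Resolve a public URI slug to a public representation only."""
    entity = _entities[_resources[slug]]
    # Expose the representation, not the entity: no ids, no shard info.
    return {"name": entity["name"]}
```

Exposing `_entities` keys or fields directly in URLs would be the 1-to-1 mapping the comment warns against; this layer is what keeps the application from becoming brittle.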
> More precisely, a resource R is a temporally varying membership function MR(t), which for time t maps to a set of entities, or values, which are equivalent.
I don't disagree at all; I'm just pointing out that it requires an extra layer of logistics and maintenance that may outweigh the benefits of the beautiful URL. And with the rise of URL shorteners and the proliferation of URL sharing/discovery via social media, I'd argue that the beauty effect of URLs is even further diminished.
Just a couple of months ago, I pushed out a major update to classiccars.com (it's still pretty messy) that cleaned up/reduced the URL structure. The old system had around 50 routing rules to support a wide variety of "friendly" URLs for searches and listing display. I reduced that to /listings/find/YEAR(s)/MAKE/MODEL?opts, where all search params other than year/make/model were not part of the route but querystring params. It didn't make sense to support all those routes (not to mention that the old results paged via postback). (Hint: you can set the page size up to 100 via a ps=100 querystring param; there are a few others not in the UI yet.) The UI is very similar to how it already was.
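A parser for a route scheme like the one described above might look like this. It's a sketch under stated assumptions, not the site's actual code: the default page size of 25, the single-year path, and the field names are all guesses made up for illustration; only the /listings/find prefix, the year/make/model path segments, and the ps=100 cap come from the comment.

```python
from urllib.parse import urlsplit, parse_qs

def parse_find_url(url):
    """Split a /listings/find/YEAR/MAKE/MODEL?opts URL into route + opts.

    Year/make/model live in the path; everything else stays in the
    querystring instead of getting its own routing rule.
    """
    parts = urlsplit(url)
    segs = parts.path.strip("/").split("/")
    if segs[:2] != ["listings", "find"]:
        return None  # not this route
    route = dict(zip(["year", "make", "model"], segs[2:5]))
    opts = {k: v[0] for k, v in parse_qs(parts.query).items()}
    # Page size is capped at 100, mirroring the ps=100 hint above
    # (25 is an assumed default).
    opts["ps"] = min(int(opts.get("ps", 25)), 100)
    return {"route": route, "opts": opts}
```

For example, `parse_find_url("/listings/find/1969/ford/mustang?ps=200&color=red")` yields the year/make/model route with `ps` clamped to 100 and `color` passed through, while a non-matching path returns `None`.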
The new routes make more sense and are, imho, more friendly. The same goes for listings/view/###/STUB; though I put the number/id before the stub, it looks a lot better than it did. Also, I put in permanent redirects from any old references to the canonical URL. It took a bit of work, and now some more updates are going in to make the title/description/h1's more friendly. It's more maintainable now, and some very old URLs are still supported.
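The permanent-redirect step can be sketched as a lookup table in front of the router. This is a hypothetical illustration (both paths in the table are invented, not real classiccars.com URLs):

```python
# Legacy URL -> canonical URL. In practice this might be generated
# from the old routing rules rather than written by hand.
LEGACY_REDIRECTS = {
    "/classifieds/ford-mustang-1969.aspx": "/listings/view/12345/1969-ford-mustang",
}

def handle(path):
    """Return (status, location): 301 for legacy paths, 200 otherwise."""
    if path in LEGACY_REDIRECTS:
        # 301 Moved Permanently: browsers and crawlers update their
        # references, so old links keep working without being indexed.
        return 301, LEGACY_REDIRECTS[path]
    return 200, path
```

Using 301 rather than 302 is what makes "some very old URLs are still supported" cheap: search engines transfer the old URL's standing to the canonical one instead of indexing both.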
Currently working on some other modernization bits, which means a lot of the cruft can finally get cleaned out (if a section at a time, slowly). Having a friendly/consistent URL structure is important imho.
You can't have multiple URLs leading to the same information because of SEO.
Well, you can (with a rel=canonical link and such), but it's not ideal practice.
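For reference, the "canonical and stuff" technique is a `<link rel="canonical">` element in the page head that tells crawlers which of several equivalent URLs to index. A minimal helper to render it might look like this (the helper name is made up):

```python
from html import escape

def canonical_link(url):
    """Render a <link rel="canonical"> element for the page <head>.

    Every duplicate page (e.g. /toys/nintendo-wii and
    /consoles/nintendo-wii) emits the same canonical URL, so search
    engines consolidate them instead of treating them as duplicates.
    """
    return '<link rel="canonical" href="%s">' % escape(url, quote=True)
```

For example, `canonical_link("https://example.com/consoles/nintendo-wii")` produces the tag both category pages would share.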
It is often better to cater to robots rather than to humans. A sad truth, highlighting Google's failures.
This is why I fundamentally disagree with the 'friendly' URLs approach. URLs are meant to be unique and permanent identifiers of resources. Injecting your information architecture, page title, etc. into them is a horribly leaky abstraction, because those things are meant to be able to change. Mixing them up either adds brittleness or means your URLs will require constant grooming to stay in sync.