Sitecore Wildcard Items

In a recent presentation to the DC Sitecore User Group, I was surprised to learn that most of the technical attendees didn’t know about Sitecore Wildcard Items. This hidden gem has been in Sitecore since at least Sitecore 4, and allows you to resolve item data however you like. For those of you with an ASP.NET MVC background, it’s like defining a route parameter at a particular URL segment. It’s an interesting example of what you can do with an HttpRequestProcessor in Sitecore’s httpRequestBegin pipeline.

The original implementation was described by John West, former CTO of Sitecore, but his original blog post is lost to the Internet. You can find a discussion of Wildcard Items on page 39 of his book, Professional Sitecore Development.

Here’s how it works: within the Sitecore content tree, you give an item the name “*”. This item acts as a wildcard, matching any URL segment at that level that isn’t already matched by an explicitly named sibling item.
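
To make the matching rule concrete, here’s an illustrative model of the resolution logic in Python (Sitecore’s actual implementation is C#, and the tree below is a made-up example, not a real Sitecore content tree):

```python
# Walk the requested URL path through a content tree, preferring an
# exactly-named child and falling back to a "*" wildcard sibling.

CONTENT_TREE = {
    "home": {
        "products": {
            "*": {},         # wildcard item: matches any product slug
            "specials": {},  # explicit sibling wins over the wildcard
        },
    },
}

def resolve(path):
    """Return the list of matched item names, or None if nothing matches."""
    node = CONTENT_TREE
    matched = []
    for segment in path.strip("/").split("/"):
        if segment in node:        # an explicitly named item takes priority
            matched.append(segment)
            node = node[segment]
        elif "*" in node:          # otherwise fall back to the wildcard item
            matched.append("*")
            node = node["*"]
        else:
            return None
    return matched

print(resolve("/home/products/red-widget"))  # ['home', 'products', '*']
print(resolve("/home/products/specials"))    # ['home', 'products', 'specials']
```

Once the wildcard item is matched, your resolver is free to interpret the original segment (“red-widget”) however it likes — as a lookup key into another part of the tree, or into an external database.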

The beauty of this is that you can treat these items as regular Sitecore items — you can set presentation details, add them to workflow, etc. — but you can map the data to other Sitecore items or even an external database. For example, Sitecore uses wildcard items to resolve products within their Sitecore Commerce Reference Storefront implementation.

To implement a Wildcard Item, you’ll need to create two things:

  1. A Sitecore Wildcard Item Resolver
  2. An optional LinkProvider, so you can generate valid URLs for these items and reference them elsewhere in the site.
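
The LinkProvider’s job is the reverse mapping: given a data item (say, a product), emit the friendly URL that the wildcard resolver will later decode. Here’s a minimal sketch of that idea in Python rather than a real Sitecore LinkProvider; the function names and the `/products/*` path are hypothetical:

```python
import re

def slugify(name):
    """Turn an item or product name into a URL-safe segment."""
    return re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")

def product_url(product_name, wildcard_path="/products/*"):
    """Replace the wildcard segment with the product's slug, so links
    elsewhere in the site point at URLs the resolver can decode."""
    return wildcard_path.replace("*", slugify(product_name))

print(product_url("Red Widget (2-pack)"))  # /products/red-widget-2-pack
```

In a real implementation, this logic would live in a class derived from Sitecore’s LinkProvider, so that calls to LinkManager.GetItemUrl for your data items produce these URLs automatically.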

If you want to take a crack at implementing this yourself, have a look at Gaurav Agarwal’s post on Resolving the Wildcard Item, which has some code snippets to get you started.

There’s also an old Sitecore Wildcard Module, which does essentially the same thing, but uses the Sitecore Rules Engine to resolve the correct item from the Sitecore content tree. See Adam Conn’s post on Wildcards and Data-Driven URLs for details. It’s been available since 2011 with Sitecore 6, and could be updated to work in modern versions of Sitecore with a little effort. I found a developer who created a revised wildcard module for Sitecore 7, for example.

Also, when using ASP.NET MVC, keep in mind that sometimes other pipelines might reset the Sitecore context item after your custom wildcard item resolver finishes its work. See Kamruz Jaman’s post on the Sitecore MVC Context Item for help troubleshooting this issue.

Wildcard items are a powerful technique, and can save a lot of hassles — I’ve seen many implementations “break out” of Sitecore to use MVC routing, then hack back in a Sitecore context or session object. But wildcard resolution keeps you within the Sitecore stack and is a more natural approach to this problem. You can learn a lot about how Sitecore’s request handling pipelines operate by studying how it works and applying it in your own solutions.

Sitecore Commerce Catalogs at Scale

Last week, I gave a presentation to the DC Sitecore User Group on Sitecore Commerce Catalogs. It was a small crowd due to some thunderstorms in the area, and I had a tough act to follow. Phil Wickland, Sitecore MVP and author of several books on Sitecore, gave a talk on Personalization for Impact, which is worth seeing.

My talk was about how and why Sitecore imports catalog data from a PIM, using the Sitecore Commerce and Microsoft D365 integration as an example.

Here’s a link to the video:

The audio is a bit hard to hear at times, but I’ve posted my slides to slideshare here:

Blue Green Deployments

I’m surprised that in 2017 more developers and IT departments haven’t heard about blue-green deployment. But apparently that’s been a problem since at least 2010, when Martin Fowler noted that blue-green deployment hadn’t gotten the recognition it deserved. This is a DevOps pattern used widely for testing and deploying websites and webservices with minimal downtime in the cloud. Amazon Web Services now has a CodeDeploy template for blue-green deployments. Microsoft lists the blue-green deployment technique first in its Azure Continuous Deployment whitepaper. It’s safe to say that blue-green deployments have graduated from an edge technique to current state-of-the-art.

The idea behind the blue-green deployment strategy is simple. When you want to test a new build of your website, you create a copy of your “blue” production instance in a stand-by or “green” environment. Then you deploy the latest code to the new green environment and run a battery of tests on it. Once the green environment passes your load, performance, security, and other tests, you direct traffic from the live “blue” infrastructure to the “green” instead.

You can leave the original blue infrastructure in place temporarily, just in case you want to roll back to the previous build. You could also decommission the blue infrastructure to save operating costs. When it is time for the next release, you make a new blue environment, deploy the release to it, and perform the tests again. If the release passes, you cut over from the currently live green environment to blue.

If releases are frequent enough, or if you are doing this with physical servers, you can skip the decommissioning step and simply alternate releases between the two environments — as long as you make sure to “reset” the standby environment back to a known baseline before you deploy your latest release to it.
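
The release cycle described above boils down to a single pointer that the cutover flips. Here’s a sketch of that state machine in Python; the `Router` class and its callbacks are hypothetical stand-ins for your load balancer and deployment scripts:

```python
class Router:
    """Holds the single pointer that decides which environment is live."""

    def __init__(self, live="blue"):
        self.live = live

    @property
    def standby(self):
        return "green" if self.live == "blue" else "blue"

    def release(self, deploy, run_tests):
        """Deploy to the standby environment; cut over only if tests pass."""
        target = self.standby
        deploy(target)
        if run_tests(target):
            previous, self.live = self.live, target  # near-instant cutover
            return previous                          # kept around for rollback
        return None                                  # live environment untouched

    def rollback(self):
        """Rolling back is just flipping the pointer again."""
        self.live = self.standby

router = Router()
router.release(deploy=lambda env: None, run_tests=lambda env: True)
print(router.live)  # green
router.rollback()
print(router.live)  # blue
```

Notice that a failed test run leaves the live environment completely untouched — the new build never receives production traffic.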

There are three huge advantages to performing releases this way:

  1. Your standby environment is a copy of production and fully scaled. You can run full acceptance, performance and load tests on this environment and be guaranteed that it will work the same in production — because it will be production, as soon as you direct traffic to it.
  2. Cutover to a new build is almost instantaneous, since all you need to do is redirect traffic. There should be no downtime.
  3. Rolling back to the previous build is almost instantaneous, since all you need to do is redirect traffic to the previous environment. Disaster recovery is easy, and your disaster recovery process is tested with every release.

Of course, there are some challenges to this approach:

  1. You do have to have a fully scaled standby environment, which will increase hardware and software costs. You can minimize these in a cloud setting, however, because that standby environment can be decommissioned when it isn’t needed.
  2. You have to invest in DevOps. If you are building and tearing down environments on a regular basis, you’ll need automated scripts for provisioning infrastructure, deploying code, and making configuration changes.
  3. Your platform should be as stateless as possible. When a cutover happens, a user’s next request will be handled by the new environment. Any state that must be maintained from one request to the next must be shared by both environments.
  4. Database schema updates must be handled differently from regular code deployments.

Of these challenges, the hardware and software costs usually draw the most resistance, largely because companies haven’t embraced consumption pricing yet. With physical hardware or perpetual software licensing, you do wind up paying double for your production setup. But for critical applications it’s probably worth the expense.

Schema updates remain a difficult engineering challenge too, but they always have been, regardless of the deployment model.

Here are a few other links on the blue-green deployment pattern, if you’d like to learn more:


Learning Software Architecture

Like many web developers, I learned the art of writing code by trial and error, rather than through formal instruction. I’d look at examples on the Internet, read forum and blog posts, and occasionally consult a book or two. This worked surprisingly well — I’ve made a career out of it, and gained a lot of knowhow through hard-won experience.

You can’t learn everything by trial and error, though. Sometimes the experiments take too long to run, or you can’t afford to have the results blow up in your face. Although I’d figured out basic software architecture principles by observing what worked about past projects and what didn’t, I wasn’t very systematic about it. But hey, I could scribble a boxes-and-arrows diagram on a whiteboard and talk folks through it, and that was enough for most web applications.

My current employer, EPAM, takes software architecture seriously. I found this out when I was invited to serve on an assessment panel for Solution Architects seeking promotion, and I developed a bad case of impostor syndrome. It turns out that there are names for the different kinds of diagrams I’d been doodling, and methods for determining which customer requirements were architecturally significant. You could subject these ideas to analysis and evaluate tradeoffs in a systematic way instead of, you know, by gut feel.


My spidey-sense for software project disasters is pretty well attuned by this stage in my career, so I think I was able to ask reasonable questions of our crop of upcoming architects. But I could tell that several of our candidates were winging it, and I wondered if it was obvious that I was groping my way along as well.

So I decided to do something about it. After a bit of Googling, I found out about TOGAF and the SEI, and the fact that there’s an international standard for software architecture, ISO/IEC 42010. Neat. And then I found the book Software Architecture in Practice.

Software Architecture in Practice

I’ve spent the last few weeks reading it through, chapter by chapter. As textbooks go, it’s well-written. It’s also (as befits a book about architecture) very well-structured, so it’s something I plan on using as a handy reference when I want to know about tactics for improving performance, or about tradeoffs between usability and security.

It’s also interesting to see a bit of what software architecture looks like in other domains. As a web developer, I don’t often need to think about the constraints imposed on real-time or embedded systems. But I do think a lot about scalability, availability, and some of the other quality attributes described in the book. It’s nice to have these cataloged, with approaches for how to address these needs and what the implications are.

I agree with the authors that the term “quality attributes” is much better than the more commonly used “non-functional requirements.” Most people seem to assume that the non-functional requirements are non-interesting and non-important, when in fact they mark the difference between unicorn and fail whale.

Later chapters address documenting and evaluating software architecture, and software architecture’s relationship to modern software development methodologies, like the various flavors of Agile.

I was really glad to have found a book that captured so much of what I’d learned in the school of hard knocks, and that offered a few new ideas and tools for me to apply to future projects. Naturally, I loaned it out to one of my coworkers and immediately recommended it to another Solution Architect I’d been mentoring at EPAM.

“Oh yes,” he said, “I know that book. It’s on the curriculum for EPAM’s Software Architecture University. That’s a good one.”

On the curriculum…? I guess I still have a way to go before I can ditch that impostor syndrome.

Hello World


It’s taken me quite a while to set up a professional blog, despite the fact that creating one is extremely easy these days. It isn’t the effort, but the motivation.

There are two types of programmers in the world: starters and finishers. Your starters can’t wait for the next project, the next technology or the next brilliant idea to come along. That drive toward novelty makes them ideal designers, prototypers, and R&D staff. But please don’t bother them with details, edge cases, or error conditions. Don’t ask them to maintain a piece of code, or tune an operational system. There are always a million more interesting things to pursue!

I’m a finisher. That means the blinking prompt or the blank page terrifies me. Where a starter sees endless greenfields and boundless opportunities, I see a beautiful, elegant white starkness about to be marred by my amateur scribblings.

So I didn’t want to start a new blog. What I really wanted to do was rebuild my old blog. The one from my ill-fated attempt to start a software company during the Global Financial Crisis. There are good articles there, posts I’m proud of writing. I’m sure I could salvage those, update them, refactor them, improve them… yes, I’m a finisher all right.

But sometimes you do need to start over, let go of the baggage, and begin again.

Hello, world.