Sitecore and StyleLabs

The big announcement at Sitecore Symposium this year was Sitecore’s acquisition of StyleLabs, a marketing technology company that provides several software-as-a-service tools:

  • Digital Asset Management (DAM)
  • Marketing Resource Management (MRM)
  • Product Information Management (PIM)
  • Digital Rights Management (DRM)

It wouldn’t be enterprise software without all the acronyms, would it?

Sitecore acquires StyleLabs
Sitecore CEO Mark Frost talks with StyleLabs’ co-founders at Sitecore Symposium 2018

This announcement was a big deal for Sitecore, because its current tool for handling media assets — the Sitecore Media Library — was increasingly showing its age. The structure and features of the Media Library haven’t changed much in 10 years. It was one of the first things partners needed to extend or replace for any company serious about managing digital media. And it offered nothing in the way of workflow tools beyond the simple “publishing approval” style of workflow used for web pages.

There were a few go-to solutions for DAM integrations — both Digizuite and ADAM Software (now part of Aprimo) have decent integrations with Sitecore. For any of the other capabilities listed above, you would most likely need to build your own solution.

So customers and partners were really pleased to hear that Sitecore will have something first-party to offer in these areas. If the schedule holds, StyleLabs will be available as a plug-in by the end of 2018, with a full integration planned for the first half of 2019.

More than a product line extension?

While addressing a gap in its offering is important — especially since Sitecore often goes head-to-head with Adobe Experience Manager in big deals — I’m more excited about the long-term potential.

This acquisition brings some new technologies, new approaches, and new team members into the Sitecore family.

StyleLabs was founded in 2011, and its cloud-hosted SaaS offerings reflect a later technology generation than Sitecore, whose roots go back before the turn of the millennium. (It says a lot about the strength of Sitecore’s fundamentals that it has lasted this long!) As Sitecore modernizes its architecture stack, StyleLabs’ expertise in SaaS technology and the SaaS business model will be invaluable.

While Sitecore was always great about storing and managing content and digital assets, it really didn’t address the production of those assets in the first place. StyleLabs understands the creative workflow that goes into making these digital artifacts, and I hope to see their knowledge reflected in future versions of the Sitecore platform.

I also expect a good deal of cross-pollination of ideas. Sitecore and StyleLabs execute similar concepts in different ways. For example, Sitecore structures information in a hierarchy or tree, which works extremely well for managing URLs in a website. StyleLabs takes a relational approach to content, which offers more flexibility but perhaps less consistency.
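
To make that contrast concrete, here is a rough sketch of how the same asset might be modeled in each approach. The field names are hypothetical illustrations, not taken from either product’s actual API:

    # Hierarchical model (Sitecore-style): identity and context come from a path in the tree.
    tree_item = {
        "path": "/sitecore/media library/Campaigns/Spring/hero-banner",
        "fields": {"Alt": "Spring hero banner"},
    }

    # Relational model (StyleLabs-style): a flat entity, linked to other entities by relations.
    relational_asset = {
        "id": "asset-1234",
        "title": "Spring hero banner",
        "relations": {
            "campaign": ["campaign-spring"],
            "brand": ["brand-contoso"],
            "usage_rights": ["license-web-only"],
        },
    }

The tree gives every asset one canonical location (and a natural URL), while relations let a single asset participate in many contexts without being copied, which is where both the extra flexibility and the looser consistency come from.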

So there are a lot of cues — from software architecture to user interface — that these teams can take from each other. While filling gaps in the product line may provide a short-term boost, I’m really curious what this means for Sitecore over the long term.

SCpbMD Catalog Sync Explained

I’ve been meaning to post this diagram for a while. I’ve used this to explain the Sitecore Commerce catalog data sync operation to at least three different clients since I drafted it this summer. And although I created it for the Microsoft Dynamics 365 version of the connector, it’s similar for Dynamics AX and other PIM systems as well.

SCpbMD_DataSync_2017-11-27.png

In the D365 box, you have the UI application, which admins can use to publish catalog data once it has been validated. There are a lot of elements that must be configured and working correctly to have a valid catalog, but a few of the key pieces are:

  • An online navigation hierarchy for your online channel
  • An assortment for your online channel containing released products
  • A catalog associated with the online channel
  • Products assigned to nodes on the online navigation hierarchy
  • Product attributes defined and attached to nodes on your online navigation hierarchy

Once the catalog is published, it’s ready to go as far as the “headquarters” database is concerned. But in Dynamics AX / D365, it also has to be distributed — sent to the online channel database using distribution jobs.

Once the catalog is in the online channel database, the D365 Retail Server can read from it. External applications can read catalog data from the online channel database using the Retail Server APIs, a curious mix of web services that aren’t quite WCF and aren’t quite REST.
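
As a rough sketch of what that read might look like from an external application, here is a minimal example. The endpoint path, paging parameters, and channel header are placeholders, not the documented Retail Server API surface:

    import requests

    # Placeholder values: the real Retail Server URL, API version, and
    # authentication scheme depend on your D365 deployment.
    RETAIL_SERVER_URL = "https://retailserver.example.com/Commerce"
    ONLINE_CHANNEL_ID = "REPLACE_WITH_CHANNEL_ID"

    def fetch_products(skip=0, top=100):
        """Read a page of published products from the online channel database."""
        response = requests.get(
            f"{RETAIL_SERVER_URL}/Products",           # hypothetical endpoint path
            params={"$skip": skip, "$top": top},       # OData-style paging
            headers={"ChannelId": ONLINE_CHANNEL_ID},  # placeholder channel header
            timeout=30,
        )
        response.raise_for_status()
        return response.json().get("value", [])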

This is where the Sitecore part of the picture comes into play. Sitecore provides a sample console application that uses Sitecore’s Data Exchange Framework to fetch data from the Dynamics Retail Server and transform it into an XML file that can then be imported into Sitecore’s Commerce Server. (Sitecore 9 also uses this catalog.xml file format, though the old Commerce Server components are no longer used.)
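
And here is a minimal sketch of the transform step, writing the fetched products out to an import file. The element and attribute names are illustrative only; the real catalog.xml schema used by the connector is considerably more detailed:

    import xml.etree.ElementTree as ET

    def write_catalog_xml(products, path="catalog.xml"):
        """Transform product records pulled from Retail Server into an import file."""
        catalog = ET.Element("Catalog", name="OnlineCatalog")
        for product in products:
            ET.SubElement(
                catalog,
                "Product",
                id=str(product.get("RecordId", "")),        # hypothetical field names
                name=str(product.get("ProductName", "")),
                category=str(product.get("CategoryPath", "")),
            )
        ET.ElementTree(catalog).write(path, encoding="utf-8", xml_declaration=True)

    # write_catalog_xml(fetch_products())  # extract, transform, load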

The import places the product and category definitions and data into the product catalog database, which acts as an “edge cache” that keeps just the products the site will use close to the infrastructure of the website itself. It also provides some redundancy in case communication problems occur between Sitecore and D365.

The last step in the process is the catalog data provider. Sitecore XP uses a data provider to access the product catalog database, creating virtual Sitecore items that appear within the Sitecore UI. Product and category data are not stored as “real” Sitecore items in the sense of living in the standard Sitecore master or web databases.

Watch the arrows!

Note the color and direction of the arrows in the diagram above. The orange arrows are the ones controlled by the Sitecore console app and Data Exchange Framework. The arrows in blue are either part of standard D365 functionality or belong to extensions to the Sitecore platform. The orange arrows could have been labeled “Extract, Transform, and Load,” because those are exactly the operations performed by the catalog sync. (If I ever redraw the diagram, I might update the labels to say just that!)

The direction of the arrows is important, too. Catalog information must be sent from AX HQ to the channel database by those batch distribution jobs. If those jobs aren’t running, then no updates occur in the channel DB, and no updates will be returned by the Retail Server API.

Once the data is available at the Retail Server, it’s up to the Sitecore catalog sync process to fetch the latest version of it. This can be run manually or as a scheduled job, but note that there are no notifications here — D365 doesn’t push the data to Sitecore; Sitecore pulls the data when it needs it.

A sequence of batches

This is definitely not a real-time process. As you might imagine from the number of batches, pushes, and pulls shown in the diagram, it can take a significant amount of time to move an update — like adding a new product to the catalog or setting up a new product attribute for a category — from D365 HQ into Sitecore XP. If every batch job involved ran on a 15-minute timer, it could take 45-60 minutes for that product to appear on the site. The interval could be longer depending on the size of the catalog and the number of changes made.
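
To put rough numbers on it (the stage names below are a simplification of the diagram, and the 15-minute intervals are examples rather than recommendations):

    # Worst-case propagation delay for a change, assuming each stage runs on its own
    # timer and the change just misses the previous run of every job.
    stage_intervals_minutes = {
        "D365 distribution job (HQ -> channel DB)": 15,
        "Sitecore catalog sync (Retail Server -> catalog.xml -> import)": 15,
        "Sitecore publish / index and cache refresh": 15,
    }

    worst_case = sum(stage_intervals_minutes.values())
    print(f"Worst case: ~{worst_case} minutes before the change is visible on the site")
    # Add per-stage processing time on top of the timers and 45-60 minutes is realistic.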

There’s more than one way to do it

Although Sitecore provides the catalog sync code as part of its commerce connectors, it’s really just an example or starter kit for us to use. In practice, you’ll need to modify the logic used to generate the catalog.xml file to import into Sitecore. You may also need to move the data sync process to other servers for scalability or performance reasons. Or you could replace Sitecore’s Data Exchange Framework with another ETL framework or a business process orchestration suite like BizTalk.

The connector is just a starting point for implementation, and hopefully the diagram and my explanation of it make a good basis for discussion with your team or client about how the catalog sync process might work.

Blue-Green Deployments

I’m surprised that in 2017 more developers and IT departments haven’t heard about blue-green deployment. But apparently that’s been a problem since at least 2010, when Martin Fowler noted that blue-green deployment hadn’t gotten the recognition it deserved. This is a DevOps pattern widely used for testing and deploying websites and web services in the cloud with minimal downtime. Amazon Web Services now has a CodeDeploy template for blue-green deployments. Microsoft lists the blue-green deployment technique first in its Azure Continuous Deployment whitepaper. It’s safe to say that blue-green deployments have graduated from an edge technique to the current state of the art.

The idea behind the blue-green deployment strategy is simple. When you want to test a new build of your website, you create a copy of your “blue” production instance in a standby or “green” environment. Then you deploy the latest code to the new green environment and perform a battery of tests on it. Once the green environment passes your load, performance, security, and other tests, you direct traffic from the live “blue” infrastructure to the “green” environment instead.

You can leave the original blue infrastructure in place temporarily, just in case you want to roll back to the previous build. You could also decommission the blue infrastructure to save operating costs. When it is time for the next release, you make a new blue environment, deploy the release to it, and perform the tests again. If the release passes, you cut over from the currently live green environment to blue.

If releases are frequent enough, or if you are doing this with physical servers, you can skip the decommissioning step and simply alternate releases between the two environments, as long as you make sure to “reset” the standby environment back to a known baseline before you deploy your latest release to it.
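
Here is a minimal sketch of that alternating cycle. The traffic switch is abstracted behind injected functions, since in practice it would be a load balancer rule, a DNS change, or a deployment-slot swap rather than anything this simple:

    # Hypothetical helper illustrating the blue-green release cycle.
    ENVIRONMENTS = {
        "blue": "https://blue.internal.example.com",
        "green": "https://green.internal.example.com",
    }
    active = "blue"  # environment currently serving production traffic

    def release(build_id, reset_environment, deploy, run_tests, switch_traffic):
        """Deploy build_id to the idle environment, test it, then cut over."""
        global active
        standby = "green" if active == "blue" else "blue"

        reset_environment(ENVIRONMENTS[standby])   # back to a known baseline
        deploy(build_id, ENVIRONMENTS[standby])
        if not run_tests(ENVIRONMENTS[standby]):   # load, performance, security, ...
            raise RuntimeError(f"Build {build_id} failed on {standby}; {active} stays live")

        switch_traffic(ENVIRONMENTS[standby])      # the near-instant cutover
        active = standby                           # rolling back is just switching again

The next release calls the same function and lands on whichever environment is now idle.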

There are three huge advantages to performing releases this way:

  1. Your standby environment is a copy of production and fully scaled. You can run full acceptance, performance and load tests on this environment and be guaranteed that it will work the same in production — because it will be production, as soon as you direct traffic to it.
  2. Cutover to a new build is almost instantaneous, since all you need to do is redirect traffic. There should be no downtime.
  3. Rolling back to the previous build is almost instantaneous, since all you need to do is redirect traffic to the previous environment. Disaster recovery is easy, and your disaster recovery process is tested with every release.

Of course, there are some challenges to this approach:

  1. You do have to have a fully scaled standby environment, which will increase hardware and software costs. You can minimize these in a cloud setting, however, because that standby environment can be decommissioned when it isn’t needed.
  2. You have to invest in DevOps. If you are building and tearing down environments on a regular basis, you’ll need automated scripts for provisioning infrastructure, deploying code, and making configuration changes.
  3. Your platform should be as stateless as possible. When a cutover happens, a user’s next request will be handled by the new environment. Any state that must be maintained from one request to the next must be shared by both environments (see the sketch after this list).
  4. Database schema updates must be handled differently from regular code deployments.
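
On the statelessness point, the usual fix is to keep per-user state in a store that both environments share, so a session started on blue continues seamlessly on green. Here is a toy sketch, with a dict standing in for an external store such as Redis or a database:

    # Shared session store that both the blue and green environments read and write.
    # A plain dict stands in for an external store (Redis, a database, etc.).
    shared_sessions = {}

    def handle_request(environment, session_id):
        """Either environment can continue a session started by the other."""
        session = shared_sessions.setdefault(session_id, {"cart": [], "served_by": []})
        session["served_by"].append(environment)
        return session

    handle_request("blue", "user-42")          # request served before the cutover
    print(handle_request("green", "user-42"))  # next request lands on green, same session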

Of these challenges, the most resistance usually comes from the hardware and software costs, typically because companies haven’t embraced consumption pricing yet. With physical hardware or perpetual software licensing, you do wind up paying double for your production setup. But for critical applications it’s probably worth the expense.

Schema updates remain a difficult engineering challenge too, but they always have been, regardless of the deployment model.

There are plenty of other good write-ups on the blue-green deployment pattern if you’d like to learn more.

Learning Software Architecture

Like many web developers, I learned the art of writing code by trial and error, rather than through formal instruction. I’d look at examples on the Internet, read forum and blog posts, and occasionally consult a book or two. This worked surprisingly well — I’ve made a career out of it, and gained a lot of knowhow through hard-won experience.

You can’t learn everything by trial and error, though. Sometimes the experiments take too long to run, or you can’t afford to have the results blow up in your face. Although I’d figured out basic software architecture principles by observing what worked on past projects and what didn’t, I wasn’t very systematic about it. But hey, I could scribble a boxes-and-arrows diagram on a whiteboard and talk folks through it, and that was enough for most web applications.

My current employer, EPAM, takes software architecture seriously. I found this out when I was invited to serve on an assessment panel for Solution Architects seeking promotion, and I developed a bad case of impostor syndrome. It turns out that there are names for the different kinds of diagrams I’d been doodling, and methods for determining which customer requirements were architecturally significant. You could subject these ideas to analysis and evaluate tradeoffs in a systematic way instead of, you know, by gut feel.

spidey-sense-lg.jpg

My spidey-sense for software project disasters is pretty well attuned by this stage in my career, so I think I was able to ask reasonable questions of our crop of upcoming architects. But I could tell that several of our candidates were winging it, and I wondered if it was obvious that I was groping my way along as well.

So I decided to do something about it. After a bit of Googling, I found out about TOGAF and the SEI, and the fact that there’s an international standard for software architecture, ISO/IEC 42010. Neat. And then I found the book Software Architecture in Practice.

SoftwareArchitectureInPractice_cover.jpg
Software Architecture in Practice

I’ve spent the last few weeks reading it through, chapter by chapter. As textbooks go, it’s well-written. It’s also (as befits a book about architecture) very well-structured, so it’s something I plan on using as a handy reference when I want to know about tactics for improving performance, or about tradeoffs between usability and security.

It’s also interesting to see a bit of what software architecture looks like in other domains. As a web developer, I don’t often need to think about the constraints imposed on real-time or embedded systems. But I do think a lot about scalability, availability, and some of the other quality attributes described in the book. It’s nice to have these cataloged, with approaches for how to address these needs and what the implications are.

I agree with the authors that the term “quality attributes” is much better than the more commonly used “non-functional requirements.” Most people seem to assume that the non-functional requirements are non-interesting and non-important, when in fact they mark the difference between unicorn and fail whale.

Later chapters address documenting and evaluating software architecture, and software architecture’s relationship to modern software development methodologies, like the various flavors of Agile.

I was really glad to have found a book that captured so much of what I’d learned in the school of hard knocks, and that offered a few new ideas and tools for me to apply to future projects. Naturally, I loaned it out to one of my coworkers and immediately recommended it to another Solution Architect I’d been mentoring at EPAM.

“Oh yes,” he said, “I know that book. It’s on the curriculum for EPAM’s Software Architecture University. That’s a good one.”

On the curriculum…? I guess I still have a way to go before I can ditch that impostor syndrome.

Hello World

computer bits

It’s taken me quite a while to set up a professional blog, despite the fact that creating one is extremely easy these days. It isn’t the effort, but the motivation.

There are two types of programmers in the world: starters and finishers. Your starters can’t wait for the next project, the next technology or the next brilliant idea to come along. That drive toward novelty makes them ideal designers, prototypers, and R&D staff. But please don’t bother them with details, edge cases, or error conditions. Don’t ask them to maintain a piece of code, or tune an operational system. There are always a million more interesting things to pursue!

I’m a finisher. That means the blinking prompt or the blank page terrifies me. Where a starter sees endless greenfields and boundless opportunities, I see a beautiful, elegant white starkness about to be marred by my amateur scribblings.

So I didn’t want to start a new blog. What I really wanted to do was rebuild my old blog. The one from my ill-fated attempt to start a software company during the Global Financial Crisis. There are good articles there, posts I’m proud of writing. I’m sure I could salvage those, update them, refactor them, improve them… yes, I’m a finisher all right.

But sometimes you do need to start over, let go of the baggage, and begin again.

Hello, world.