Introducing the right features, the right way
Our researchers spend a lot of time figuring out the right features to add to our platforms. They partner with our designers to shape these features in a way that will be most beneficial and usable for our users. But no matter how much time we spend in this process, we can't really know how a feature will be received until we release it. That is why we started using Split.io. Split.io allows us to introduce new features safely and experiment with designs to find the one that works best.
How it works:
Split.io allows us to integrate multiple interface designs of each feature seamlessly into our platforms, then control who sees which design within a small sample group of users. We can then measure the impact of each interface, determine which design performs best, and, with a few clicks in the settings, release that interface to the rest of our users. This process is known as A/B testing.
Here is how A/B testing works with Split.io: we implement each design in the platform, then wrap each one in a code snippet that checks the Split.io server to determine whether or not to display it. This lets us configure who sees which layout and study the impact each layout has. Once we determine which layout works best, we configure the platform to show the winning layout to everyone.
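To make the wrapping idea concrete, here is a minimal sketch of how a variant check might look. This is not the actual Split.io SDK; the function names and the simple hash are illustrative. The key property it demonstrates is that assignment is deterministic per user, so the same user always sees the same layout.

```javascript
// Illustrative sketch only -- not the Split.io SDK. Real SDKs use a more
// robust hashing and bucketing algorithm than this.
const VARIANTS = ['layout-a', 'layout-b'];

function hashUserId(userId) {
  // Simple deterministic string hash.
  let hash = 0;
  for (const ch of userId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return hash;
}

function getVariant(userId) {
  // The same user ID always hashes to the same variant.
  return VARIANTS[hashUserId(userId) % VARIANTS.length];
}

function renderCheckoutButton(userId) {
  // Each design is wrapped in a check that decides which one to show.
  return getVariant(userId) === 'layout-a'
    ? '<button class="primary">Buy now</button>'
    : '<button class="outline">Add to cart</button>';
}
```

Because the check runs at render time, both designs ship in the same build, and the server-side configuration (simulated here by the hash) decides which one each user sees.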
A/B testing allows us to eliminate poor user experiences, experiment with new designs, and fine-tune our features. But Split.io offers more than that. It safeguards our releases by giving us total control over which features are displayed, to whom, and when. That means we can deploy new features to Production without worrying that they will break the platform. Because, let's face it, no matter how much we test in a separate, clean sandbox, we can't know how a feature will behave with real data and real users.
With Split.io, we can wrap our features in a code snippet that pings the Split.io server for instructions on whether to display the feature or not. This process is called feature flagging. In the dashboard, we can configure custom rules and control the traffic allocation for who gets to see the feature. This allows us to test the feature in-house, then expose it to a small sample group. Then when we are sure the feature works, we can roll it out to all users.
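The rules-plus-traffic-allocation idea described above can be sketched as follows. The configuration shape and function names here are hypothetical, not Split.io's actual API; they mirror what the dashboard controls: an allow-list rule for in-house testers plus a percentage of everyone else.

```javascript
// Hypothetical flag configuration (illustrative names, not Split.io's API).
const flagConfig = {
  name: 'new-search-bar',
  internalUsers: ['qa-1', 'qa-2'], // in-house testers always see the feature
  rolloutPercent: 10,              // then 10% of everyone else
};

function bucket(userId) {
  // Deterministically map a user ID to a bucket in [0, 100).
  let hash = 0;
  for (const ch of userId) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return hash % 100;
}

function isFeatureOn(config, userId) {
  // Custom rule first: internal users are always on.
  if (config.internalUsers.includes(userId)) return true;
  // Otherwise, traffic allocation: on for the first rolloutPercent buckets.
  return bucket(userId) < config.rolloutPercent;
}

function render(userId) {
  // The feature itself is wrapped in the flag check.
  return isFeatureOn(flagConfig, userId) ? 'new search bar' : 'old search bar';
}
```

Rolling out to everyone is then just a configuration change (`rolloutPercent: 100`), with no redeploy of the application code.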
Because testing an application of our size is a large and complex undertaking, we supplement the incredible work of our QA team with automated tests. We use a framework called Cypress that lets us script tests as a series of UI steps with an expected result, ensuring that UI interactions behave correctly. Introducing Split.io to our testing suite raised an interesting challenge: we needed a way to test every flow of a feature, for each variant. But Split.io identifies each user via a unique ID and assigns them a variant, guaranteeing that the user is always served the same experience.
Luckily, Split.io includes a feature called localhost mode, which reads from a locally stored object instead of pinging the server to see which variant to serve. This lets developers work on each variant without visiting the dashboard every time, and lets us force a specific variant and test each flow by invoking localhost mode whenever our testing suite is running.
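The localhost-mode idea can be sketched like this. The names here are illustrative rather than Split.io's exact API, but the mechanism is the one described above: when the test suite is running, treatments are looked up in a local object, so each variant can be forced deterministically with no network round-trip.

```javascript
// Illustrative sketch of localhost mode -- not Split.io's exact API.
// Local treatments let tests force a specific variant for each flag.
const localTreatments = {
  'new-search-bar': 'on',
  'checkout-redesign': 'layout-b',
};

function fetchTreatmentFromServer(flag) {
  // Placeholder for the real network call to the feature-flag server.
  return 'control';
}

function createClient({ testMode }) {
  if (testMode) {
    // Localhost mode: read from the local object, never hit the server.
    return { getTreatment: (flag) => localTreatments[flag] || 'control' };
  }
  // Normal mode: ask the feature-flag server which variant to serve.
  return { getTreatment: (flag) => fetchTreatmentFromServer(flag) };
}
```

In a test run, `createClient({ testMode: true })` pins every flag to a known treatment, so each UI flow can be exercised against each variant in turn simply by editing the local object between runs.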
As you can see, a lot goes into adding a feature to our applications, and this is just one of the steps involved in the research, design, and implementation of new features. It's a critical step that gives us more control, experimentation, and refinement over the features we release, so that we can deliver the right feature the right way.
As a Front-End Engineer at ZapLabs, Renee focuses on improving consumer applications by adding new features, integrating new technologies, and debugging major issues. When she gets out from behind the screen, she is either in the kitchen whipping up something delicious, or at her beading table creating stunning jewelry pieces.