Komprise Replications

Designing the monitoring experience
When
October - December 2020
Responsibilities
UX design, UI design, Task flows, Prototyping
Collaborators
CEO, Product Manager, FE engineers, BE engineers
Better monitoring, easier management
In response to a partner's request, Komprise developed an asynchronous replication tool that would allow IT professionals to safeguard company data by replicating it to an alternative location. To secure the deal, we committed to delivering a seamless end-to-end experience that would align with the launch of the partner's new products.
My contribution
As the lead designer, I took charge of defining and designing the Replications experience in close collaboration with the lead product manager and engineering team. A significant aspect of my role involved prioritizing the design of the monitoring table on the main page, a critical feature within the product.
Scope and constraints
Komprise needed to release the beta by the end of 2020, which left us 2 months to design, build, and conduct QA testing. The timeline was more aggressive than usual and left little room for user research, ideation, and iteration.

I strategically prioritized defining the core screens and workflows of the Replications platform, while leveraging existing design components to streamline front-end work.
User insights
Because the Product Manager and CEO were familiar with our customers and understood the technical product requirements, I met with them first to get better insight into our users. Since we didn't have time to conduct user research, I relied on them heavily to define the user needs that would guide this process. Through our discussion, we concluded that:
  1. Users wanted a quick way to monitor which tasks were failing to complete and breaching SLAs
  2. Users needed to be able to identify and find their replication tasks, even when they had created a high volume of them
Defining
Once I had a better idea of our user goals and needs, I worked with the PM to lay out the framework for the new product. We made a rough sitemap of the screens to serve as a design and prioritization guide. I outlined the basic components, while relying on the PM to fill in the technical details.

Based on our work, we came to the consensus that the monitoring table, since it would also serve as the landing page, would be the first feature to design and hand off.
Diving into high-fidelity
Although I would normally begin with lower-fidelity wireframes, I recognized that I needed to start with high-fidelity designs and accurate mock data. This allowed me to get instant, actionable feedback from my product team and accurate technical feedback from our FE and BE teams on what would be feasible to build within our limited timeframe.

I began with the components we already had in our design system, while closely following the user insights and sitemap we had captured. While fleshing out the design, I tried to keep the following in mind:
  • What information was important for the user to be alerted to visually?
  • How might we allow users full control and flexibility in managing their tasks?
I added visual anchors to break up the amount of text in the table, including status icons on the far left and progress bars for scannability.

The progress bars in the middle of the table would also be color-coded as an additional visual alert for error-related pauses or stops during a replication.
Search and filtering were also important considerations, since we expected users to add a high volume of replication tasks.

In addition to comprehensive sorting and filtering, we identified 2 use cases:
  1. Monitoring: users need to locate a single replication task or multiple replication tasks
  2. Discovery: users need to find problematic replications upon opening the table
For monitoring, we added 2 searchable "task identifier" columns: one that listed the source of the replication, and one with the customizable task name.

For discovery, I connected the "data callouts" above the table to the table filters. Clicking on these callouts would activate the related filters and allow users to zero in on tasks of interest.
Integrating feedback
After presenting these initial designs to stakeholders, I received feedback that the table was overwhelming and appeared to have superfluous information. The consensus was that it was difficult to pinpoint useful metrics within the table and understand which tasks had successful backup copies.

I set up a meeting with the CEO and BE engineers to better understand the use cases and the minimum amount of information needed on the main page.

Together, we realized that we had missed key user insights in our earlier meetings.
Updates
Backup status and information
Based on our new objectives, I wanted there to be an easy way for users to tell whether their last backup had succeeded or failed.

Unfortunately, the two columns that communicated this, "Last successful run" and "Last completed run", were verbose and unclear in meaning.
I noticed that we could reduce the amount of text in the table by surfacing insights that told users directly what they wanted to know, rather than raw task metrics, which largely left users to connect the dots themselves.

I renamed the "Last successful run" column to "Recovery copy" to make the wording more backup-oriented.

"Last completed run" was combined with the "Status" column to minimize the number of places the user had to scan and to surface only relevant statuses.
Table links
Now that the table was refined, it was important that users had a quick link from data in the table to its details.

We pinpointed 3 columns that might trigger navigation to the details page.
Details related to these columns were then organized into accordions on the details page. Clicking a link in one of these columns would direct users to a preconfigured view of the details page, with the relevant accordions opened or closed.
Layout
Once the content was finalized, I added more padding between columns and increased the table margins to make the visual experience less overwhelming.
Outcomes
Once this first iteration of the monitoring table was completed, I quickly moved on to designing the other features of the product at a similar breakneck pace. The main page underwent 2 more minor iterations through work with the front-end, back-end, and QA teams before the beta was released on December 5th. Through our combined efforts over 2 months, Komprise was able to secure a $4 million deal with our partner and greatly increase our customer base.

Future plans for this feature included adapting the design to fit a wider range of data types. While we received positive feedback from initial customers, my plan was to eventually run usability tests on these screens to better understand how users interact with the table to monitor their replication tasks and access their backup copies.