Solve the Right Problem

DASHBOARD REDESIGN

“If I had asked people what they wanted, they would have said faster horses.”

01.1 Introduction

This quote, often attributed to Henry Ford (though its authenticity is debated), highlights how people tend to frame their desires within the limits of what they already know. Had Ford relied solely on customer feedback, he might have pursued faster or more efficient horses rather than an entirely new mode of transportation: the automobile.

As designers, our role goes beyond addressing surface-level requests—we uncover the deeper needs behind them.


This project began as a redesign of a Web Dashboard I had previously worked on. While the dashboard was functional, there was room for improvement in clarity and usability. As I revisited the design and product requirements, I saw an opportunity not just to refine the interface but to rethink the problem itself.


The starting point

The original dashboard used a three-stage funnel—Pre-Malicious, Malicious, and Post-Malicious—to show lifecycle volumes. While functional, it lacked clarity on movement between stages and failed to connect findings back to their sources. The result was a static, disconnected view that left users questioning accuracy and completeness.

Redesign goals

  • Highlighting the breadth and depth of monitoring coverage

  • Connecting data sources directly to lifecycle stages

  • Visualizing lifecycle transitions in a way that supports decision-making

  • Communicating operational progress alongside risk monitoring

  • Keeping the most relevant information within a single viewport


Concept

02.1 Early Concept

Figures: Bubble Graph Design Exploration 1.0, 2.1, and 2.2

Early Exploration and Limitations

Initial iterations explored a bubble chart for consistency with reporting visuals, followed by a simplified bar graph to map data feeds to funnel stages. While the bar graph improved traceability, it introduced several issues:

  • Excessive use of purple tones, making differentiation difficult

  • Crowding of the funnel area and reduced visual hierarchy

  • Limited affordance for interaction and detail exploration

  • Long numeric values that were difficult to scan quickly


02.2 Design Process

02.3 Design Audit

02.4 Design Explorations

02.5 Redefine the Problem

Rethink the Problem

User needs analysis revealed that “seeing the lifecycle” in full detail was not the core requirement. At a micro level, a single domain’s lifecycle is easy to follow; at scale, the complexity obscures the signal. The priority was reframed around two primary questions:

  1. Risk movement: How much low risk became high risk within the period, and how much clean data re-entered a risky state

  2. Operational effectiveness: How efficiently high-risk items were taken down


This reframing shifted the design from lifecycle completeness toward actionable insight.


Was the funnel truly the best way to represent the lifecycle, or was it simply the solution we defaulted to?

This prompted me to dig deeper into what users genuinely needed from this dashboard.

If we examine the lifecycle of a single domain, its progression is relatively straightforward:

  • A domain might start as Pre-Malicious (low risk), escalate to Malicious (high risk, requiring action), and eventually move to Post-Malicious (clean) once we take it down.

  • Over time, the same domain might cycle back to Pre-Malicious or Malicious if flagged again.


At this micro-level, the lifecycle makes sense. However, when summarizing these transitions for thousands of domains at a macro level, things become much more complex:


  • In the Pre-Malicious phase, some data originates from newly found sources, while other data may have previously been clean but has now re-entered the low-risk category.

  • From Pre-Malicious, data can either escalate to Malicious or return to Post-Malicious.

  • These transitions occur at different times and involve intricate flows.


My initial attempt to represent these movements using a timeline helped visualize transitions over a specific period. However, I started questioning whether users needed this level of detail.
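
To make that macro-level bookkeeping concrete, here is a minimal sketch of the idea in Python, assuming a hypothetical data shape rather than the product's real model: each domain's lifecycle is an ordered list of timestamped state changes, and a period summary is simply a count of every state-to-state movement observed inside that period.

```python
from collections import Counter
from datetime import date

# Hypothetical histories: domain -> ordered (date, state) events.
events = {
    "shop-example.com": [
        (date(2024, 1, 3), "pre_malicious"),
        (date(2024, 1, 9), "malicious"),
        (date(2024, 1, 20), "post_malicious"),  # taken down
    ],
    "login-example.net": [
        (date(2024, 1, 5), "post_malicious"),
        (date(2024, 1, 28), "pre_malicious"),   # previously clean, flagged again
    ],
}

def transition_counts(events, start, end):
    """Count every state-to-state movement that occurred inside the period."""
    counts = Counter()
    for history in events.values():
        ordered = sorted(history)
        for (_, prev_state), (day, state) in zip(ordered, ordered[1:]):
            if start <= day <= end and state != prev_state:
                counts[(prev_state, state)] += 1
    return counts

summary = transition_counts(events, date(2024, 1, 1), date(2024, 1, 31))
for (src, dst), n in summary.items():
    print(f"{src} -> {dst}: {n}")
# pre_malicious -> malicious: 1
# malicious -> post_malicious: 1
# post_malicious -> pre_malicious: 1
```

Even this toy version makes the earlier point visible: per-domain histories are easy to read, but the summary collapses thousands of them into transition counts that no longer show when or why each movement happened.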


Realizing the True User Needs

When users asked to see the lifecycle, I began to ask myself:

  • Why do they need to see lifecycle transitions?

  • Do they care about every granular movement, or is there a higher-level insight they’re looking for?


Upon reflection, I realized that what users truly needed boiled down to two key areas:

  1. Risk Monitoring

  • How much low-risk data transitioned to high-risk during a given period?

  • How much clean data became risky again?

  2. Workflow Progress

  • How effectively is Bolster addressing threats?

  • How much high-risk data has been successfully taken down?


These questions focus on the outcomes users care about, rather than the granular transitions themselves.
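
As an illustration of the reframing, the sketch below shows how the two headline answers could be read straight from aggregated transition counts, with no need to render the full lifecycle. All names and figures are made up for the example and are not Bolster's actual metrics.

```python
# Hypothetical transition counts for one reporting period (illustrative only).
transitions = {
    ("pre_malicious", "malicious"): 1_240,     # low risk escalated to high risk
    ("post_malicious", "pre_malicious"): 310,  # clean data re-entered a risky state
    ("malicious", "post_malicious"): 1_115,    # high-risk items taken down
}
open_high_risk_at_start = 430                  # high-risk backlog carried into the period

# 1. Risk monitoring: how much risk moved in the wrong direction.
escalated = transitions[("pre_malicious", "malicious")]
re_flagged = transitions[("post_malicious", "pre_malicious")]

# 2. Workflow progress: how effectively high-risk items were resolved.
taken_down = transitions[("malicious", "post_malicious")]
takedown_rate = taken_down / (open_high_risk_at_start + escalated)

print(f"Low risk escalated to high risk: {escalated}")
print(f"Clean data flagged again: {re_flagged}")
print(f"High-risk items taken down: {taken_down} ({takedown_rate:.0%} of the period's workload)")
```

These are the kinds of outcome figures the redesign surfaces instead of the raw transition flows.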


Redefine the Problem

This realization shifted my perspective. Instead of trying to fit all lifecycle details into a funnel, I reframed the problem:


The goal was not just to show the lifecycle but to provide insights into risk trends and workflow effectiveness in a way that users could easily understand.


This redefinition allowed me to step back from the funnel as the sole solution and explore other ways to meet these core needs.


Final Design

03.1 Design Showcase

03.2 Design Analysis

03.3 Impact

The redesigned dashboard now answers the core questions of risk evolution and threat mitigation progress. By aligning the visual hierarchy with user priorities, the design connects sources, lifecycle stages, and operational outcomes in a single, coherent view. The process underscored a critical UX principle: validate the real problem before refining the visualization.
