Live Battery Event Monitoring

    Premise: This page monitors live battery discharge events and links out to details such as battery site and individual battery asset information.

    Identifying Issues via Users

    I began this project with little to no familiarity with the product, and identified which areas to change through sessions with users and subject matter experts.

    User research formats:

    • In-person user interviews with both groups of users

    • Sessions with individual users

    • Sessions with subject matter experts


    Additional notes on the overall interface

    This was a web of interlaced information and issues that could not be solved with just one page. (To read about how I knitted all the pages together to tackle the overarching problem, please see my write-up on Battery Monitoring Information Architecture.)

    It became clear that the recurring pain point was identifying performance.

    Goal: The focus of this dashboard/page is to present the top-level information, performance, in a way that allows the user to easily dig into more detail when needed.

    The current table does have a “Performance” column, but it doesn’t show that:

    • Performance is measured at every hour of the event

    • The benchmark of performance changes based on the type of hour

    • There are 3 types of performance measures based on different guidelines

    Current dashboard (simulated numbers)

    Addressing User Needs: Surfacing Performance Details

    I wanted to retain the structure of the Events console while finding ways to add useful information.

    Additional information tucked into tooltips: I added an info icon to provide more context, such as the fact that the committed kW of energy varies between hours, and which exact metric “Performance” references.


    Adding in other performance metrics: I knew there were 3 performance metrics in total but wasn’t sure of their order of importance, so I left that to be validated later. With this in mind, I sketched indicators only for the “most” important performance metric to avoid overwhelming the user. (Later I would find that all three performances are important, and I ended up adding indicators to all of them.)

    New columns of various performance metrics

    Performance indicators: In the current dashboard, performance is visually indicated by a rectangular underlay. Although it serves as a visual indicator of severity, it doesn’t show progress or volatility.

    A better method would be to use open/closed circles to indicate progress and color to indicate the actual performance. It is not only less intrusive but adds a new layer of information. For example, a red open circle is less severe than a red closed circle because open means it’s dynamic and may change.

    Open/closed circles as performance indicators
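The indicator logic above can be sketched as a small mapping. The thresholds, color bands, and the idea that performance is a 0–1 ratio of actual vs. committed output are all illustrative assumptions, not the product's real values:

```typescript
// Sketch of the open/closed circle indicator described above.
// Thresholds and names are hypothetical, not the product's actual rules.

type Fill = "open" | "closed";           // open = event still running, value may change
type Color = "green" | "yellow" | "red"; // severity of the performance value

interface Indicator {
  fill: Fill;
  color: Color;
}

// `performance` is assumed to be a 0–1 ratio of actual vs. committed output;
// `eventActive` decides an open (dynamic) vs. closed (final) circle.
function indicatorFor(performance: number, eventActive: boolean): Indicator {
  const color: Color =
    performance >= 0.95 ? "green" :
    performance >= 0.80 ? "yellow" :
    "red";
  return { fill: eventActive ? "open" : "closed", color };
}
```

For example, a low performance value on an active event yields a red open circle, while the same value on a finished event yields a red closed circle, matching the severity distinction described above.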

    Visualizing progress without graphics: Instead of using “Duration”, I added “hour/hours (x out of total)” to show event progress without sacrificing the actual number, something Operations, who tended to export this information for external analysis, emphasized heavily. You can see in my sketch that I played around with progress bars as indicators, but quickly realized they served little utility for data-heavy users.

    Event status as "x hr out of x hrs"

    Hourly performance: Currently, hourly performance can be accessed via a click-through to the event details page. Users flagged this as cumbersome because their workflow is: (1) identify low-performing events, (2) look at what hour the event is in and whether there was a specific hour it performed badly, (3) figure out why that hour/event performed so badly, and so on. It essentially meant they’d have to scrub through all the event details pages to find the information.

    One solution is to create an accordion table, where rows can expand to reveal hourly information. (Indicated by the arrows in the mockup).

    New design system component: accordion table
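One way to model the accordion rows is sketched below. The field names and shapes are hypothetical, since the write-up doesn't specify the component's API; the point is that collapsed events render as single rows while expanded events also surface their hourly details:

```typescript
// Hypothetical data shape for the accordion table described above.

interface HourDetail {
  hour: string;        // e.g. "14:00"
  committedKw: number; // commitment can vary per hour
  performance: number; // hourly performance vs. that hour's commitment
}

interface EventRow {
  eventId: string;
  site: string;
  hours: HourDetail[]; // hourly details revealed on expansion
  expanded: boolean;   // accordion state
}

// Flatten events into the rows the table actually renders: a collapsed
// event contributes one row; an expanded event also adds its hourly rows.
function visibleRows(events: EventRow[]): (EventRow | HourDetail)[] {
  return events.flatMap(e => (e.expanded ? [e, ...e.hours] : [e]));
}
```

Nesting hourly details under their event keeps collapsed rows cheap to render while making per-hour commitment and performance available without leaving the page.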

    This required building a new component in the design system, and we decided it was an important asset: useful not only for this page and product, but also for other products in our lineup.

    Validating Assumptions and Feedback

    After several review sessions with the users, the feedback answered several of my assumptions:

    Is it important to see which event is active? Yes. It can technically be identified from the start date, but users wanted to easily tell whether something already happened, is happening, or is about to happen.

    [I added in lower font opacity to the END time of events to indicate that it hasn't finished/been finalized]

    Does the commitment (which is a baseline for one of the performance metrics) vary based on whether the event is in normal or extended hours? Yes, the commitment can change within a single event.

    [When the row is expanded, commitment is assigned to each hour]

    What level of detail should event status show? We don't actually know when an event will end; we have a rough estimate, but we don't know for sure, so it's not really possible to say an event is in "hr x of x hrs".

    [I ended up changing my "x hr / total hrs" back to duration and, again, assigned it a lower opacity while the event is active and ongoing, because of the uncertainty]

    Once expanded, should hours be displayed with the most recent at the top? Yes, it's a flowing timeline, so seeing the most recent hour first makes sense.

    [This validated my assumption]

    What do you do with this data? I export it into excel for reporting summaries/further analysis.

    [Added a download button/feature (✔️ confirmed feasible by engineers)]

    Final Prototype

    New Events Monitoring prototype