Syringe Pump Interface Design: Updated Benchmarking and Evidence Review

Dennis Lenard

May 2026

An updated benchmarking of four syringe pump interfaces connecting design pattern analysis to clinical evidence on medication errors, alarm fatigue, and regulatory events through 2025.

This article draws on Creative Navy's project work in medtech UX, spanning practice management software, surgical equipment, ventilators, blood pumps, infusion systems, and patient monitoring devices, including Class II and Class III regulated products. Our work in this sector covers clinical environments including the ICU and operating theatre, designing for surgeons, nurses, and biomedical engineers. Dennis Lenard, who leads this work at Creative Navy, is the author of User Interface Design For Medical Devices And Software, the practitioner reference on UX design for medical devices and software. Our approach integrates IEC 62366 usability engineering requirements and FDA Human Factors guidance as structural inputs to the design process, not post-hoc compliance activities.

A nurse programmes a syringe pump during a medication round. The screen is small. Three buttons at the bottom look identical. The drug library alert fires; she overrides it, as she has done with hundreds of others this month. The infusion starts. Whether she entered the right rate, in the right unit, for the right patient weight: the interface gave her no confirmation that would distinguish a correct entry from an incorrect one.

This is not a hypothetical. It describes the structural conditions under which syringe pump interfaces operate in most hospitals. As of October 2025, approximately 90% of hospitalised patients in the United States receive intravenous medications or fluids. A 2017 multi-hospital study that directly observed nurses using smart pumps found that 60% of medication infusions involved one or more errors. The interfaces mediating that process have changed less in the intervening years than the evidence about them.

This article updates and extends an earlier Creative Navy benchmarking study of syringe pump interfaces, incorporating evidence published through 2025. It reviews four devices examined in the original study, applies an expanded evaluation framework, and situates the findings within the clinical and commercial evidence that has since accumulated. The audience is product managers, clinical systems leads, and device UX decision-makers working at or with medical device companies.

Syringe Pump UX: What We Reviewed

Key Statistics

  • 90% of hospitalised US patients receive IV medications or fluids (AACN, October 2025)
  • 60% of medication infusions involved one or more errors in direct multi-hospital observation (Schnock et al., 2017)
  • 75.8% of soft alerts for high-alert medications overridden by nurses (Schnock et al., 2017)
  • 11 to 17 steps required to programme a routine saline infusion on the leading smart pumps then in clinical use (Giuliano, 2018)
  • One of four pump models in a 150+ VA hospital study generated five times more use errors than the others (Herrero et al., February 2025)
  • 9 to 15% of US hospitals have deployed smart pump EHR interoperability, as of 2024

The four devices reviewed here are the MedCaptain HP-30, the ISPLab01 Syringe Pump, the Harvard Apparatus PHD ULTRA, and the Digital Laboratory Syringe Pump dLSP500. Each appeared in the original 2023 benchmarking. This update applies an expanded framework connecting interface-level observations to clinical outcomes evidence.

The evaluation criteria are: visual hierarchy and information prioritisation; signifier clarity, meaning whether interactive elements are distinguishable from static labels; alarm and alert management; error recovery and confirmation design; workflow alignment with observed clinical use; and design currency, meaning whether the interface has been updated in response to clinical evidence since the original review.

An important correction applies to the PHD ULTRA. The manufacturer's user guide explicitly labels it "For research use only. Not for clinical use on patients." The 2023 article reviewed it alongside clinical-grade devices without noting this distinction. This update treats it separately and adjusts the scope of its findings accordingly.

The most common UX failure across syringe pump interfaces is the absence of visual hierarchy that establishes what the nurse needs right now versus what is contextual background. When flow rate, unit labels, alarm indicators, and device status compete for attention with equal visual weight, the user scans the screen rather than reads it. Scanning increases parameter misreading, step omission, and fixation on secondary values. This failure pattern appears across device categories and screen sizes.

That is the baseline condition against which the individual reviews below should be read. Each device is assessed against the criteria above, with the cross-cutting patterns examined in the comparative section that follows.

MedCaptain HP-30

The MedCaptain HP-30 uses a 3.0-inch touchscreen with two physical buttons. As of April 2025, the device is distributed in its original configuration with no public announcement of a UI redesign.

The home screen does one thing correctly: it makes the current flow rate the dominant visual element. In a category where screens frequently crowd parameters without rank, this is a deliberate and correct hierarchical decision. The clinical priority is readable without scanning.

The problems are in the interaction layer. Three buttons at the bottom of the screen share identical colour and size properties. No action is distinguished as primary. A nurse encountering the device for the first time has no visual signal for where to begin. The syringe brand label at the top doubles as a control for changing the brand with no signifier that a label is tappable. The rate number opens a flow-rate editing window when tapped, but nothing in the design suggests this is possible. Both failures produce trial-and-error navigation in a device context where hesitation costs time.

Menu navigation is handled better. Arrow buttons and dot indicators give orientation within nested screens. Sub-menu items carry arrow marks that signal depth. These conventions reduce the cognitive load of navigation for a device operated while attending to other tasks simultaneously.

What the original review framed as a hardware constraint (the 3.0-inch screen) is better understood as a prioritisation problem. The screen carries information that can be organised by clinical urgency. The HP-30 does this on the home screen. It does not sustain it through the interaction sequence.

ISPLab01 Syringe Pump

The ISPLab01 uses a 4.3-inch touchscreen with four physical buttons that control mechanical functions only, such as start and stop. The separation between physical actuation and software navigation is structurally clean. It removes the class of errors that arises when users attempt to use physical buttons to navigate software states.

The home screen divides information into four quadrants: flow rate and status at top left; settings table at top right; parameters table at bottom left; navigation buttons at bottom right. The intention is information density. All relevant parameters on one screen. The effect is visual noise. The quadrant layout assigns equal weight to all four areas with no hierarchy signal to tell the nurse which quadrant matters most under clinical pressure.

The contrast between background and text is good. Buttons are clearly distinguishable from labels. These are not trivial qualities in a device category where unclear affordances produce trial-and-error behaviour. But contrast and clear buttons are conditions for readability, not evidence of usability. A screen can be readable and still fail to support the clinical task it is built around.

The ISPLab01's separation of physical and software control is the right structural decision. Its information architecture does not build on that foundation.

Harvard Apparatus PHD ULTRA

The PHD ULTRA requires a categorical note before the interface review. As stated in the manufacturer's user guide (publication 5419-002-Rev-1.0), the device is "For research use only. Not for clinical use on patients." The 2023 benchmarking did not flag this. The interface observations below apply to a laboratory device, not a clinical one, and the safety implications are different in kind.

The interface carries the visual character of an earlier generation of laboratory instrumentation. Vibrant colours reduce readability under normal working conditions. On the Quick Start screen, colour-coding of action buttons does useful categorical work: green indicates direct device control (start and stop); blue indicates settings and menu functions. This is one of the more coherent categorical signals across the four devices reviewed.

The Run screen does not sustain that coherence. The visual difference between parameter labels and parameter values is present but too slight for rapid reading. Button size dominates the layout at the expense of the parameter data the buttons act on. The guided text box providing next-step instructions is well-implemented, giving feedback through the process.

As of 2025, the PHD ULTRA remains Harvard Apparatus's primary current offering for this device category. The legacy Pump 22, Pump 33, and PHD2000 series were discontinued by 2020. The PHD ULTRA interface has not been redesigned.

Poor syringe pump UX contributes to medication errors through a specific sequence. An interface without clear visual hierarchy causes nurses to scan screens rather than read them. Under clinical time pressure, scanning produces parameter misreading, unit errors, and skipped confirmation steps. Nurses managing alarm fatigue override alerts they no longer trust. Workarounds emerge: temporarily doubling flow rate before patient connection creates overdose risk if the pump is not reset. The interface creates the conditions in which errors become probable.

Digital Laboratory dLSP500

The dLSP500 uses a 7-inch touchscreen configured as a standalone tablet connected to the device via cable rather than integrated into the device housing. This is the largest display in the original benchmark and it produces a clear result: screen area permits the kind of information separation that smaller screens cannot support.

The status bar carries logo, time, and user information. Side navigation replaces top navigation, which is the correct structural choice for a parameter-heavy device: side navigation scales to item count and is less disruptive to move through than a horizontal top bar during a workflow. Breadcrumb navigation shows the user's hierarchical position on screen. These are conventions from well-designed data-heavy software, applied consistently here.

The font colour choice undermines overall readability. Green text on a dark blue background falls short of the 4.5:1 minimum contrast ratio that accessibility standards such as WCAG 2.x set for body text, and clinical reading conditions are less forgiving than office ones. The screen size compensates partially: large buttons and generous spacing reduce the risk of accidental input. But contrast is not a function of screen size. A contrast failure on a 7-inch screen is still a contrast failure.
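For readers who want to test a palette directly, the WCAG 2.x contrast calculation is short enough to run by hand. The hex values below are hypothetical stand-ins for a green-on-dark-blue palette, not the dLSP500's published colour specification; this is a sketch, assuming a mid green on a dark blue:

```python
# WCAG 2.x relative-luminance contrast check.
# The colour values below are illustrative, not the device's actual palette.

def _linearise(channel: int) -> float:
    """Convert an 8-bit sRGB channel to linear light (WCAG 2.x formula)."""
    c = channel / 255
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    """Weighted sum of linearised channels per the WCAG definition."""
    r, g, b = (_linearise(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    """Ratio of lighter to darker luminance, each offset by 0.05 per WCAG."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

green = (0, 128, 0)      # a mid green, hypothetical
dark_blue = (0, 0, 139)  # a dark blue, hypothetical

ratio = contrast_ratio(green, dark_blue)
print(f"{ratio:.2f}:1")   # roughly 3:1
print(ratio >= 4.5)       # False: fails the WCAG AA body-text threshold
```

Note that a bright, saturated green can pass the luminance ratio even on dark blue, which is why the exact shades matter more than the colour names; a failing palette is a choice of values, not of hues.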

The dLSP500's most significant design quality is structural: a navigation model that reflects how users move through a parameter-heavy workflow, not how the software was built. That principle transfers to smaller screens. The execution does not.

Why the Interface Stasis Gap Matters

The Interface Stasis Gap names the growing distance between two things that move at different speeds. Syringe pump interfaces remain largely unchanged across product generations. The clinical evidence quantifying the consequences of that stasis accumulates year by year. The gap is a compounding liability: every year a known interaction failure remains in a product, the documented cost of that failure grows.

The evidence is now specific enough to make this concrete. In 2017, direct observation across multiple hospitals found that 60% of medication infusions involved one or more errors, and that nurses overrode 75.8% of soft alerts for high-alert medications. In 2018, the leading smart pumps in clinical use required between 11 and 17 steps to programme a routine saline infusion. A systematic review of more than 30,000 pumps across 150 VA hospitals, published in February 2025, found that one of four pump models generated five times more use errors than the others.

Meanwhile, regulatory events have converted what was clinical research into commercial risk. The BD Alaris system, holding approximately 55 to 60% of the US infusion pump market as of July 2023, was subject to a global recall before receiving FDA re-clearance. Smiths Medical issued urgent software correction letters for the Medfusion 3500 and 4000 syringe pumps in December 2023, citing alarm failures, screen lock failures, and re-administration of loading doses. Both events are traceable, in part, to the user interface layer.

The devices reviewed in this article sit within this regulatory and clinical context. Their interface decisions are not isolated design choices. They are decisions whose consequences can now be quantified.

Cross-Device Pattern Analysis

Across the four devices, three patterns hold regardless of screen size.

The first is uneven treatment of visual hierarchy. Every device organises its home screen around some version of the most clinically important parameter. None does it consistently across the full interaction sequence. The breakdown point is always the same: once the nurse enters an editing or menu state, the clinical priority signal disappears and the interface reverts to structural organisation by function rather than urgency.

The second is inconsistent use of signifiers. The HP-30 and the PHD ULTRA both present tappable elements without distinguishing them from non-interactive labels. The ISPLab01 handles button affordances correctly; it fails on information hierarchy. The dLSP500 handles both reasonably; it fails on contrast. None of the four devices reviewed does all of these correctly.

The third is design currency. Three of the four devices have not been redesigned since the original review. For the HP-30 and the PHD ULTRA, public information confirms no redesign is announced as of 2025. The clinical evidence that has accumulated during that period is not reflected in their interfaces.

Device | Screen | Visual Hierarchy | Signifier Clarity | Design Currency
MedCaptain HP-30 | 3.0 in | Good on home; breaks in menus | Weak: labels double as controls | Unchanged, April 2025
ISPLab01 | 4.3 in | Poor: equal-weight quadrant layout | Good: buttons clearly marked | No update data found
PHD ULTRA (research only) | N/A (laboratory device) | Partial: colour coding inconsistent | Moderate | Unchanged, 2025
dLSP500 | 7.0 in | Good: structured layout with breadcrumbs | Good: large targets, clear separation | No update data found

The patterns visible in syringe pump interfaces (crowded screen layouts, inconsistent navigation models, unclear signifiers for interactive elements) are not specific to this device category. Medical device information architecture benchmarking documents the same structural failures across device types. We have documented the same alarm presentation failures and information hierarchy problems in dialysis machine interfaces, where the clinical stakes are comparable. Dialysis machine UX benchmarking provides the cross-category evidence.

Screen Size Is Not the Problem

The original 2023 article concluded that screen dimension is "the most significant limitation in designing excellent UX for this type of device." That conclusion is incorrect, and the evidence accumulated since 2023 makes the correction possible.

Screen size is a constraint. It is not the determining factor in whether a syringe pump interface causes errors. The strongest counter to the screen-size argument comes from the error-rate data: the 2017 Schnock et al. observation study found a 60% error rate on smart pumps with screens broadly comparable to those benchmarked in the original article. A larger screen does not reduce alarm override rates. It does not prevent unit-of-measurement entry errors. It does not make a drug library more likely to be used rather than bypassed.

What we observe consistently in medical device clients is just how large the distance is between the design team's assumptions and the conditions in which the device actually operates. Nurses work under time pressure, amid alarm noise, across multiple concurrent tasks. Entire workflows are bypassed by nursing staff: not because of insufficient training, but because they have found a faster path through the interface than the one the design team imagined. The most fundamental issue is not the screens themselves but the conditions in which those screens are used: conditions that had not been observed during design. Contextual inquiry and clinical field observation are the methods that close that gap. Regulatory compliance checklists are not a substitute for them.

The resistance we most consistently encounter when proposing this kind of observation to a syringe pump or infusion device team is framed around resource consumption. Contextual observation is viewed as an added cost and a delivery delay, not as risk mitigation. There is a tendency to treat regulatory compliance testing as a proxy for validated usability. These are two different things. One confirms that the device passes a defined test. The other tells you whether nurses operating under real clinical conditions can use it without error.

Measuring the cognitive load that a syringe pump interface imposes is not a matter of counting buttons: it requires objective assessment of visual complexity, density, and information hierarchy. Objective interface complexity measurement makes this assessment computable rather than inferential.
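One way to make that computable can be sketched in a few lines. This is a toy illustration, not Creative Navy's actual measurement method; the element names, dimensions, and the area-times-font-size weighting are all hypothetical:

```python
# Toy screen-complexity metrics: density and "visual authority".
# All element names, sizes, and the weighting scheme are illustrative.
from dataclasses import dataclass

@dataclass
class Element:
    name: str
    width_mm: float
    height_mm: float
    font_pt: float  # 0 for non-text elements

    @property
    def area(self) -> float:
        return self.width_mm * self.height_mm

def density(elements, screen_w_mm, screen_h_mm):
    """Fraction of screen area covered by elements (overlap ignored)."""
    return sum(e.area for e in elements) / (screen_w_mm * screen_h_mm)

def authority_ratio(elements):
    """How clearly one element dominates: the largest visual weight over the
    runner-up. Values near 1.0 mean parameters compete for attention."""
    weights = sorted((e.area * max(e.font_pt, 1) for e in elements), reverse=True)
    return weights[0] / weights[1]

# A quadrant layout like the ISPLab01's: four equal-weight regions.
quadrants = [Element(n, 40, 25, 12) for n in
             ("flow rate", "settings", "parameters", "navigation")]
print(authority_ratio(quadrants))  # 1.0: no visual authority

# A hierarchy like the HP-30 home screen: the rate dominates.
hp30 = [Element("flow rate", 40, 20, 28),
        Element("status", 20, 8, 10),
        Element("syringe brand", 20, 8, 10)]
print(authority_ratio(hp30) > 5)   # True: one clearly dominant element
```

The point of even a crude metric like this is that it turns "the quadrants compete for attention" from a reviewer's judgement into a number that can be compared across screens and design iterations.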

The counterargument to this position has real force. Screen size does constrain what can be displayed simultaneously, and design solutions available on a 7-inch screen are not all available on a 3-inch screen. A larger screen does not solve the clinical problem, but it does expand the solution space. The practical reality is that most installed clinical pumps operate on screens between 3 and 5 inches and will continue to do so for the duration of their service lives. Locating the problem in hardware is a deferral: it places the solution in the next generation rather than in the design decisions available now.

Design Principles for Syringe Pump UX

Four principles apply regardless of screen size.

Establish a single visual authority on every screen. The parameter the nurse needs at this moment in the workflow must be the visually dominant element. Not the most recently updated parameter. Not the parameter the software architecture makes easiest to surface. The one the clinical task demands right now. The HP-30 achieves this on the home screen. It does not sustain it.

Separate interactive elements from display elements without ambiguity. Every tappable element must carry a visible affordance. Every label that is not interactive must be visually inert. Hiding controls inside labels, or presenting buttons without distinguishing visual weight, produces trial-and-error navigation. Under clinical time pressure, trial-and-error navigation produces errors.

Design for override behaviour, not against it. In the 2017 observation data, nurses overrode 75.8% of soft alerts for high-alert medications. An interface that generates alerts without the contextual information needed to evaluate them will produce override rates that approach 100%. At one institution in 2018, smart pumps generated alarms during one-third of their usage, producing more than 106 hours of alerts per month. An alert design that interrupts a clinical task without providing clinical value is a design failure, not a compliance problem.

Treat interoperability as an interface decision. A systematic review published in November 2024 found that smart pump EHR interoperability reduces directly attributable medication administration errors by between 15.4% and 54.8%. A 2025 health economic model estimated that this prevents 56 adverse drug events annually at a 1,500-bed health system, saving approximately $531,891. The interface layer is where interoperability succeeds or fails in practice. The data entry workflows and confirmation patterns preceding an auto-programmed infusion determine whether the integration delivers its intended clinical benefit.
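The per-event economics implicit in those figures are worth making explicit. A quick derivation from the numbers above (the per-ADE cost is computed here, not quoted from the study):

```python
# Back-of-envelope check on the cited health economic model: the article's
# figures (56 prevented adverse drug events, ~$531,891 saved annually) imply
# a per-event cost. The per-ADE figure is derived, not stated in the source.
prevented_ade_per_year = 56
annual_savings_usd = 531_891

cost_per_prevented_ade = annual_savings_usd / prevented_ade_per_year
print(f"${cost_per_prevented_ade:,.0f} per prevented adverse drug event")
# → $9,498 per prevented adverse drug event
```

At roughly $9,500 per avoided event, the model's savings scale linearly with how many errors the interface layer actually intercepts, which is why the data entry and confirmation design carries the economic weight.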

The commercial picture reinforces these priorities. In December 2023, Ivenix (Fresenius Kabi) signed a multiyear agreement with Mayo Clinic for 10,000 large-volume pumps, positioned explicitly on UX improvement as a market differentiator. Baxter's Novum IQ syringe pump received FDA clearance in April 2024, positioned on EHR integration and auto-programming as error-reduction mechanisms. These are product launches where interface design is named as competitive rationale, not treated as a downstream concern.

Limits and Gaps

Three gaps in the current evidence are genuine and should be named.

The causal link between specific interface patterns and specific error types is documented at the population level but not isolated at the interaction level. The Schnock error-rate data is observational: it quantifies frequency, not mechanism. Establishing which specific interface decisions cause which specific errors in which clinical contexts requires prospective interaction-level research that has not been conducted at scale.

The benchmarking in this article is observational. It identifies design patterns that align with known failure mechanisms; it does not test whether modifying those patterns changes error rates. The Herrero et al. VA study (February 2025) is the closest available evidence to a controlled comparison, showing a five-times disparity in use error rates between pump models, but it does not isolate the interface variable from other differences between those models.

The devices reviewed here do not include the BD Alaris Guardrails system after its 2023 relaunch, the Baxter Novum IQ with Dose IQ software (cleared April 2024), or the ICU Medical Plum 360. All three have either been redesigned or newly launched with UX as explicit design rationale. A meaningful updated benchmarking of those devices would require access to current interface documentation that is not publicly available. That comparison is what this analysis now requires.

Conclusion

The original 2023 benchmarking identified real interface failures across four syringe pump devices. This update changes the significance of those findings, not their substance. The failures are not isolated observations about specific products. They are instances of a pattern that clinical evidence now quantifies at scale: interfaces not designed around observed clinical use, unchanged in response to the evidence about their failure modes, operating in a market where competitors are naming interface quality as primary competitive rationale.

The nurse at the start of this article, overriding an alert she no longer evaluates individually, is not an outlier. She is the median case. The interface around her was designed for a procedure that proceeds without interruption, without competing alarms, without the time pressure that makes a five-step shortcut worth the risk. The clinical literature now documents what she already knows: the design and the practice have separated, and the gap is widening.

The Interface Stasis Gap is not a design problem in isolation. When the market-leading pump requires a global recall and re-clearance, when urgent software correction letters cite alarm failures, and when major health systems sign multiyear agreements with competitors on the basis of UX differentiation, the competitive context has changed. The question for product organisations is no longer whether interface improvement is worth investing in. It is whether the current design survives the scrutiny that the evidence and the market are now applying.

The path requires observation before design. Not interviews after the interface is built, and not regulatory compliance testing as a proxy for clinical validity. Direct observation of how nurses operate the device under real conditions: under time pressure, alarm fatigue, and the multi-task load that makes a cluttered screen into an error waiting to happen. That is what the clinical evidence consistently identifies as absent from the design process. It is what the interface failures consistently reveal.

Frequently Asked Questions

What are the most common UX failures in syringe pump interface design?

Across the devices benchmarked here, the most consistent failures are absent visual hierarchy between parameters of different clinical priority, unclear affordances for interactive elements, and alarm designs that produce high override rates. Schnock et al. (2017) found that nurses overrode 75.8% of soft alerts for high-alert medications, which is a reliable signal that alert design is not meeting clinical needs rather than that nurses are not following protocol.

How does IEC 62366 apply to syringe pump interface design?

IEC 62366 requires manufacturers to conduct usability engineering throughout device development, including formative evaluation during design and summative validation before market entry. The standard does not specify screen size or layout; it requires that use errors anticipated from the intended use environment are identified and addressed. The gap between laboratory summative testing under IEC 62366 and real clinical conditions is a documented limitation. The standard does not require that observation occur in the live clinical environment; it requires that the simulated test conditions reflect that environment adequately.

Why does the BD Alaris recall matter for interface design?

The BD Alaris system held approximately 55 to 60% of the US infusion pump market as of July 2023, when it received FDA re-clearance following a global recall involving software and user-interface-related difficulties among other factors. At that market share, interface failures in the Alaris system have population-scale consequences. The recall is the most direct recent evidence that interface problems in infusion pumps are a regulatory and commercial event, not only a clinical observation.

What is the FDA MAUDE database and why is it relevant here?

The FDA MAUDE (Manufacturer and User Facility Device Experience) database records adverse events and near-misses for medical devices, including infusion pumps and syringe pumps. It is the primary public record of reported interface-related device failures in the US. Patterns in MAUDE data for pump categories, including alarm failures and use errors associated with drug library bypass, provide post-market evidence of interface failure modes that pre-market usability testing did not eliminate.

What would a meaningful update to this benchmarking require?

A rigorous updated benchmarking would include the current BD Alaris Guardrails interface after its 2023 relaunch, the Baxter Novum IQ with Dose IQ software (FDA clearance April 2024), and the ICU Medical Plum 360. It would apply direct observation of nurses using each device in a simulation environment equivalent to the Giuliano (2016) study, rather than inferring use error risk from interface pattern analysis alone. That study has not been conducted.

When does a syringe pump interface redesign become a regulatory obligation?

A redesign that changes the user interaction model requires a new IEC 62366 summative evaluation and, depending on the scope of changes and the device's regulatory classification, may trigger a 510(k) submission or PMA supplement under FDA rules, or a substantial modification notification under EU MDR Article 120. The validation cost of redesign is a real constraint. What most product organisations lack is a structured method for answering when the accumulated risk of not redesigning exceeds that cost.

References

Schnock, K. O., Dykes, P. C., Albert, J., Ariosto, D., Call, R., Cameron, C., et al. (2017). The frequency of intravenous medication administration errors related to smart infusion pumps: A multihospital observational study. BMJ Quality & Safety, 26(3), 195-202. https://qualitysafety.bmj.com/content/26/3/195

Giuliano, K. K. (2018). Intravenous smart pump drug libraries: Recommendation and guidance for library development, content, and maintenance. Critical Care Nursing Clinics of North America, 30(2), 145-159. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4850078/

Giuliano, K. K. (2016). Usability testing of smart pump user interface redesign. Applied Nursing Research, 29, 50-55. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4850078/

Herrero, A., et al. (2025). Comparative use error rates across infusion pump models in Veterans Affairs hospitals. Frontiers in Digital Health. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11841431/

Skog, M., et al. (2025). Impact of smart pump EHR interoperability on medication administration errors: A systematic literature review. PubMed (PMID 40256649). https://pubmed.ncbi.nlm.nih.gov/40256649/

Borrelli, E. P., et al. (2025). Health economic model of smart pump interoperability at a 1,500-bed health system. ClinicoEconomics and Outcomes Research. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12301143/

American Association of Critical-Care Nurses. (October 2025). IV medication administration statistics [Press release]. https://www.aacn.org

Rhode Island Anesthesia Services. (June 2024). Syringe pumps: Overview, clinical uses, and safety considerations. https://rianesthesia.org

U.S. Food and Drug Administration. (December 2023). Smiths Medical Medfusion 3500 and 4000 urgent field safety correction [Recall notices]. https://www.fda.gov/medical-devices/recalls-corrections-and-removals-devices

Harvard Apparatus. (2025). PHD ULTRA syringe pump series user's guide (Publication 5419-002-Rev-1.0). Harvard Bioscience. https://www.harvardapparatus.com

Van der Sluijs, A. F., et al. Standardisation of syringe pump change procedures. In Making healthcare safer III (NBK555506). Agency for Healthcare Research and Quality. https://www.ncbi.nlm.nih.gov/books/NBK555506/

In this story

Syringe pump interfaces have changed little while clinical evidence of their failure modes has grown. This updated review of four devices assesses visual hierarchy, signifier clarity, alarm management, and design currency against peer-reviewed research and regulatory records through 2025, and introduces the Interface Stasis Gap as the governing pattern.

