This article draws on Creative Navy's project work in complex technical and scientific software, spanning computational fluid dynamics, surgery planning systems, scientific research software, CAD/CAM platforms, circuit simulation, vessel tracking systems, air traffic control, and mission control environments. We have designed for demanding technical experts such as CFD analysts, circuit design engineers, surgeons, air traffic controllers, mission controllers, and maritime operators. A central competency in this work is the visualisation of complex, dynamic, multi-dimensional data under real operational conditions, where clarity and precision directly affect decision quality. Several of these environments are governed by specific human factors standards, including EUROCONTROL and ICAO guidelines for ATC, IEC 62366 for medical software, and NASA requirements for mission-critical systems.
The export takes twelve seconds. The researcher clicks through the MTEX import wizard, selects a reference frame option that sounds correct, and runs the texture analysis. The pole figures look plausible. They look, in fact, exactly like the results they were hoping for. A colleague at a different institution runs the same dataset through the same pipeline and produces a mirror image. Two days and four community forum threads later, the answer is on the LightForm Wiki: the MTEX import wizard presents eight reference frame options and provides no guidance on which is correct for data exported from AZtec. The default is wrong. It has been wrong for every user who did not already know to fix it.
This is not an isolated incident. It is a documented pattern, repeating across institutions, software versions, and user populations with a consistency that removes any question of coincidence.
Between 2024 and 2026, a credibility reckoning is under way for scientific instrument software. Community documentation has displaced vendor documentation as the authoritative source for the most important parts of the analysis pipeline. Forum threads, institutional wikis, and a community-maintained MATLAB library have collectively filled the guidance gaps that vendor software left open. We examine why this moment is commercially significant, and what it means for vendors who have not yet responded to it, in our related article on the strategic role of UX in scientific software.
This article presents the Eight-Theme Failure Taxonomy for Scientific Instrument Software: a structured framework built from multi-source research across EBSD and EDS software. It is written for software leads, product managers, and instrument teams who want to understand what is actually wrong and in what order the failures matter. Each theme is a reproducible failure mode. Taken together, they reveal a pattern that is not about scientific complexity. It is about interface design.
Key Statistics
- 8 failure themes identified across EBSD/EDS software from peer-reviewed literature, institutional SOPs, GitHub issues, and community forum analysis
- Coordinate system confusion is the single most-raised issue across Reddit, ResearchGate, mailing lists, and GitHub discussions spanning the entire evidence corpus
- MTEX 6.0 (as of October 2024) introduced multiple spatial reference systems and the how2plot convention system: the first structural improvement for coordinate system workflows in the post-2024 period
- CTF round-trip failure (MTEX GitHub Issues 478 and 479): MTEX-exported CTF files cannot be reimported into AZtecCrystal, a failure that remains unresolved as of March 2026
- AZtec 6.2 release notes (as of February 2025) contain no mention of Auto-Clean safeguards or audit trail features, despite vendor blog acknowledgement that aggressive Auto-Clean use leads to paper rejection
- Community documentation carries the correction burden for the three highest-intensity failure themes: coordinate system confusion, data cleaning risk, and export interoperability are all absent from Oxford video tutorial content
- AZtecFlex licences (as of 2025): student annual licence $139, standard $1,399, a cost barrier for the facility users who represent the widest distribution of the software across institutions
How EBSD Software Is Structured
EBSD software is not a platform. It is an involuntary consortium of tools that were never designed to work together.
The ecosystem divides into four functional strata: acquisition platforms (AZtec, Bruker Esprit, EDAX APEX), post-processing environments (AZtecCrystal, Channel 5, OIM Analysis, MTEX), advanced analysis tools (MTEX, CrossCourt4, DREAM.3D/DREAM3D-NX, HyperSpy), and simulation integration packages (DAMASK, VPSC, CPFEM codes). A researcher completing a publication-ready analysis typically passes through three to five tools across three to four of these strata. The tools share no common session state, no shared provenance model, and no agreed coordinate system convention.
Every crossing between strata is a manual handoff. The universal handoff point is file export. AZtec produces a .ctf, .h5oina, or raw .ebsp file. The AZtec session is typically closed. Everything that happens to the data downstream carries no formal link back to the acquisition record. This pattern is so consistent across institutions and domains that it has a name in facility documentation: the workflow seam. It is not documented as a limitation in any vendor material.
Who the Vendor Research Does Not Reach
Vendors conduct usability research with the users they can reach: the expert crystallographers and materials scientists who attended the training, who use the software daily, and who have accumulated the workarounds. They are not reaching the PhD student who uses the departmental SEM once a month, the facility manager who supports twenty researchers across five disciplines, or the postdoc from a computational background running EBSD for the first time on a borrowed instrument time slot.
This is the widest-distribution user population in the field. It is almost entirely absent from vendor-conducted research. The gap between the user model built into the software and the user actually sitting at the microscope is the structural source of most of the failures described in this article. Contextual inquiry and field observation across the actual user population, rather than the expert minority, are what close this gap.
Having established both the ecosystem structure and the research gap it creates, the question becomes: what does the evidence show when that wider population encounters the software? The answer is eight reproducible failures.
The Eight-Theme Failure Taxonomy
The framework presented here is built from peer-reviewed literature, institutional standard operating procedures, GitHub issue trackers, ResearchGate Q&A threads, and community documentation maintained by facilities including the LightForm group at the University of Manchester and the Warren Lab at the University of Delaware. The eight themes are not eight bugs. They are eight categories of interface design failure that repeat across vendors, software generations, and user populations.
Theme 1: Coordinate System Confusion
The most common UX failure in EBSD software is coordinate system confusion. AZtec, MTEX, OIM Analysis, and DREAM.3D each use a different default reference frame. A researcher who exports from AZtec and imports into MTEX without a manual correction will produce orientation maps that look correct but are wrong. The failure is silent, has entered published literature, and is documented only in community resources. No vendor software flags it at the point of crossing.
The scope is wider than it first appears. For hexagonal materials, an additional 30° ambiguity exists between Oxford and EDAX conventions that applies to all titanium, zirconium, and hexagonal mineral analysis. CrossCourt4 uses a different Euler angle frame from AZtec's default, which means HR-EBSD elastic strain and stress tensor calculations are computed in the wrong plane if the correction is not applied. This is the highest-consequence silent failure mode in the corpus. The DREAM.3D documentation states the consequence directly: Oxford CTF data requires explicit rotation filter application, and results are "wrong" without it. The word "wrong" appears in vendor documentation for this specific failure. It does not appear in the import interface.
MTEX 6.0 (as of October 2024) introduced multiple spatial reference systems and the how2plot convention, the most significant structural improvement in this area in years. The problem is not unsolvable. It is unsolved in the interface.
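To make the fix concrete, here is a minimal sketch of a frame-safe import in MTEX. The 'convertEuler2SpatialReferenceFrame' flag is MTEX's documented option for Oxford/HKL exports; the lattice parameters, mineral name, and file name are illustrative, and the axis alignment option is where the hexagonal 30° convention choice is made.

```matlab
% Minimal MTEX import sketch for an AZtec-exported CTF file.
% Lattice parameters and names are illustrative. The axis alignment
% ('X||a' vs 'X||a*') encodes the 30-degree hexagonal convention
% difference described above.
CS = crystalSymmetry('6/mmm', [2.95 2.95 4.68], 'X||a*', 'Z||c', ...
    'mineral', 'Ti (alpha)');

% The documented flag that reconciles the Euler angle frame with the
% spatial frame for Oxford/HKL data. Omitting it produces maps that
% look plausible but are mirrored: the silent failure described above.
ebsd = EBSD.load('map_from_aztec.ctf', CS, 'interface', 'ctf', ...
    'convertEuler2SpatialReferenceFrame');

% Plausible pole figures are not evidence of a correct frame. Plot a
% known texture component and verify it lands where it should.
plotPDF(calcDensity(ebsd('Ti (alpha)').orientations), Miller(0,0,0,1,CS));
```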
Theme 2: Destructive Cleaning Without Warnings
AZtecCrystal's Auto-Clean function produces scientifically invalid data when applied without expert knowledge of its parameters. This is not a fringe risk. It is explicitly acknowledged in an Oxford Instruments blog post, which states that aggressive cleaning makes features look artificial and that omitting any mention of data cleaning leads reviewers to dismiss a paper's scientific value.
AZtec 6.2 release notes (as of February 2025) contain no mention of Auto-Clean safeguards or audit trail features. The in-software interface presents Auto-Clean as a standard workflow option. The warning exists in a blog post. It is not in the software.
A related failure compounds this: the cleaning level scale runs from 1 to 8, but the scale is inverse and non-intuitive. The most commonly reported misunderstanding is that users run level 8 first, assuming higher numbers correspond to more thorough cleaning. The actual convention reverses this. There is no in-software indication of the scale's direction.
AZtecCrystal cannot store a cleaning routine for reuse on subsequent maps. Each cleaning session is manual, undocumented, and non-reproducible by design.
Theme 3: The Broken Transfer Chain
Data integrity is not guaranteed across format conversions. This is not a marginal failure: it is the defining structural characteristic of the EBSD analysis pipeline.
MTEX-exported CTF files cannot be reimported into AZtecCrystal (MTEX GitHub Issues 478 and 479, as of March 2026). The error message reads: "Microscope acquisition parameters not available." The workflow is strictly one-directional. Converting .oipx to .ctf for TSL OIM Analysis drops IPF and grain data, leaving only image quality. The full AZtec → Channel 5 → OIM Analysis → MTEX conversion chain loses grain information at the first step, corrupts the coordinate system at the second, and fails to reimport at the third. No single documented safe path through the full chain exists.
Acquisition metadata is silently dropped at each conversion: accelerating voltage, working distance, step size, and tilt angle are all lost or partially preserved depending on which format pair is involved. The burden of verifying that data integrity has survived the transfer falls entirely on the user.
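Because no tool performs this verification, facilities script it themselves. The sketch below shows what such a check can look like in MATLAB; the field list follows the Oxford/HKL CTF text header, and both functions are assumptions for illustration rather than part of any vendor API.

```matlab
% Sketch of a metadata survival check across a format conversion.
% Field names follow the Oxford/HKL CTF text header; extend the list
% for your own pipeline. No vendor tool provides this check.
function checkMetadataSurvival(beforeFile, afterFile)
    fields = {'XCells', 'YCells', 'XStep', 'YStep', 'AcqE1', 'AcqE2', 'AcqE3'};
    before = readCtfHeader(beforeFile);
    after  = readCtfHeader(afterFile);
    for i = 1:numel(fields)
        f = fields{i};
        if ~isfield(before, f)
            continue;                 % absent even before conversion
        elseif ~isfield(after, f)
            fprintf('DROPPED: %s (was %s)\n', f, before.(f));
        elseif ~strcmp(before.(f), after.(f))
            fprintf('CHANGED: %s: %s -> %s\n', f, before.(f), after.(f));
        end
    end
end

function header = readCtfHeader(fname)
    % CTF headers are "Key Value" lines; the per-pixel table begins at
    % the column header row, which starts with "Phase".
    header = struct();
    fid = fopen(fname, 'r');
    line = fgetl(fid);
    while ischar(line) && ~startsWith(line, 'Phase')
        parts = strsplit(strtrim(line));
        if numel(parts) >= 2
            header.(matlab.lang.makeValidName(parts{1})) = parts{2};
        end
        line = fgetl(fid);
    end
    fclose(fid);
end
```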
Theme 4: The Missing Audit Trail
Cleaning steps, threshold values, coordinate frame choices, and parameter selections leave no machine-readable record in the exported file. Published papers routinely omit cleaning details; a 2018 Oxford Instruments blog post identifies this omission as the reason reviewers dismiss papers. Grain size results vary between laboratories running the same protocol because the GOS threshold, grain boundary misorientation angle, and minimum pixel count are all user-settable without software-enforced defaults or output logging.
The grain boundary misorientation threshold is not embedded in exported outputs. The GOS threshold for recrystallised fraction calculation varies across the literature at 1°, 2°, and 5°, with no software guidance on selection. The minimum pixels-per-grain threshold for grain size measurement diverges between ISO 13067 (10 pixels) and ASTM E2627 (100 pixels), a tenfold difference that changes published results. The software accepts any value. None of these choices is flagged as consequential.
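To see how much room these unset defaults leave, consider a minimal MTEX sketch of the calculation. Every numeric value below is a user-settable choice on which the cited standards and papers disagree, and none of it is written into the exported result.

```matlab
% Three thresholds that silently shape a recrystallised fraction
% result. The values shown are among those that diverge across the
% literature and standards; nothing in the software flags the choice.
misorientationThreshold = 10*degree;  % grain boundary definition
gosThreshold            = 2*degree;   % 1, 2, and 5 degrees all appear in print
minPixelsPerGrain       = 10;         % ISO 13067 uses 10; ASTM E2627 uses 100

grains = calcGrains(ebsd('indexed'), 'angle', misorientationThreshold);
grains = grains(grains.grainSize >= minPixelsPerGrain);

recrystallisedFraction = ...
    sum(grains(grains.GOS < gosThreshold).area) / sum(grains.area);

% The audit trail failure in one line: none of the three thresholds
% above survives into any exported output. Reproducing the number
% means reproducing this script.
```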
Theme 5: Phase ID and Mental Model Violations
Multiple interface features produce incorrect output in conditions the software does not communicate.
TruPhase, the combined EDS-EBSD phase identification system, is inapplicable when grains are smaller than the difference in interaction volume between the EDS and EBSD signals. The software produces output regardless. There is no in-software warning. The condition under which TruPhase is invalid is disclosed only in technical papers and advanced training, not in the interface.
Confidence Index is misunderstood by a significant proportion of users as a preparation quality metric. It is a phase discrimination metric. When two phases in the expected phases list are structurally similar, CI will be low even if pattern quality is excellent. The software does not surface this distinction. Users concluding that low CI indicates poor sample preparation will repeat preparation unnecessarily and misattribute systematic indexing failures to technique.
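The disambiguation the interface withholds is scriptable. A sketch, assuming EDAX .ang data loaded in MTEX, where the per-pixel properties are exposed as ci and iq (Oxford imports expose different property names, such as bc and mad):

```matlab
% Separating "low CI because phases are similar" from "low CI because
% patterns are poor". Property names ci and iq follow EDAX .ang
% imports in MTEX; other formats expose different property names.
iq = ebsd.prop.iq;   % image quality: pattern sharpness
ci = ebsd.prop.ci;   % confidence index: phase discrimination

scatter(iq(:), ci(:), 1);
xlabel('Image quality (pattern sharpness)');
ylabel('Confidence index (phase discrimination)');

% High IQ with low CI points at structurally similar candidate phases,
% not at sample preparation.
```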
The Auto-Clean assumption, the TruPhase assumption, the CI assumption, and the coordinate-system-handles-itself assumption share a structure: the interface presents a feature as though it works unconditionally, and the conditions under which it fails are documented elsewhere.
Theme 6: Expert Paths for Routine Operations
The following operations are conceptually routine but require MATLAB scripting expertise to perform correctly in the MTEX environment: coordinate frame verification, parent grain reconstruction, geometrically necessary dislocation density calculation, Dauphiné twin correction in quartz, and custom grain size filtering. None has a GUI equivalent in the current toolset.
Parent grain reconstruction is documented in a dedicated peer-reviewed paper (Niessen et al., Journal of Applied Crystallography, 2022). The paper exists because the parameterisation is too complex to be self-explanatory. It has become a required citation. This is an expertise bottleneck that has been formalised as a publication.
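The shape of that bottleneck is visible in the code itself. A compressed sketch of the MTEX parentGrainReconstructor workflow follows; method and option names track the MTEX documentation at the time of writing and vary between versions, the orientation relationship shown assumes a titanium alpha/beta transformation, and every numeric parameter is a consequential choice with no GUI equivalent.

```matlab
% Compressed parent grain reconstruction sketch (MTEX, Niessen et al.
% 2022). Assumes a fully transformed titanium map; parameter values
% are illustrative and option names vary between MTEX versions.
grains = calcGrains(ebsd('indexed'), 'angle', 3*degree);
job = parentGrainReconstructor(ebsd, grains);

% Initial orientation relationship guess, then refinement against the
% measured child-to-child boundary misorientations.
job.p2c = orientation.Burgers(job.csParent, job.csChild);
job.calcParent2Child;

% Vote on parent orientations from grain boundary misorientations and
% accept votes above a threshold.
job.calcGBVotes('p2c', 'threshold', 2.5*degree);
job.calcParentFromVote;

parentGrains = job.parentGrains;
```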
The Sheffield beta reconstruction executable, a third-party tool maintained by a specific research group at the University of Sheffield, is a named dependency in the titanium workflow documentation. Researchers who need beta reconstruction must download a tool whose continued maintenance depends on the ongoing activity of one academic group. This is not an edge case in the geology or materials science communities. It is the documented standard workflow.
Theme 7: Updates That Break Workflows
MTEX 4.0 broke .osc file import without announcement; users who had built their analysis pipeline on MTEX 3.5 discovered this mid-project. An AZtec software update broke Oxford-to-EDAX pattern file transfer; the issue (HyperSpy Discussion 214) remains of uncertain resolution as of March 2026. The Channel 5 .cpr/.ctf export option was disabled in AZtec without announcement; users discovered the change mid-workflow. DREAM.3D Classic was discontinued without a migration tool to DREAM3D-NX.
No version compatibility matrix is maintained by any vendor or tool in the ecosystem. The result is a culture of version pinning: researchers identify a software configuration that works for their specific workflow and refuse to update, knowing that an update may break the pipeline without warning or guidance on how to restore it. This culture directly suppresses adoption of improvements.
Theme 8: The Acquisition Promise
AZtecSynergy acquires EDS and EBSD data simultaneously. The product proposition is that chemical and crystallographic information are collected in a single pass. Re-indexing the data severs that linkage.
The documented workaround, in the Warren Lab standard operating procedure at the University of Delaware, runs to seven steps per phase: re-index, generate EDS subset masks, nullify non-target phase data in Tango, apply noise reduction, save under a different name, and repeat for every additional phase before finally assembling the maps in MapStitcher. A four-phase sample requires four separate re-indexed projects, four subset masks, four noise-reduced exports, and one manual assembly pass. The simultaneous acquisition did not close the seam.
Oxford Instruments' own 3D EBSD documentation acknowledges that one of the major challenges is reconstruction of data into a full 3D volume due to offset between layers. No in-AZtec solution is documented.
Where Community Evidence Points
The eight failure themes were identified through multi-source triangulation across forum analysis, peer-reviewed literature, changelog review, and community documentation spanning ResearchGate Q&A threads from 2014 to 2024, MTEX GitHub issue trackers, institutional standard operating procedures from the LightForm group at the University of Manchester and the Warren Lab at the University of Delaware, and the DREAM.3D and CrossCourt4 vendor documentation. The methodology behind this kind of multi-source research, and what it reliably produces in practice, is illustrated in detail in our work on how user research informs design decisions.
The most important finding from that triangulation is not the failures themselves. It is the correction burden distribution. The three highest-intensity failure themes (coordinate system confusion, data cleaning risk, and export interoperability) are all absent from Oxford Instruments' official video tutorial content. Community documentation carries all three. The LightForm Wiki documents the iterative cleaning protocol, the MTEX coordinate system correction requirement, and the GOS threshold selection rationale. The MTEX GitHub issues tracker documents the CTF round-trip failure. The Oxford Instruments blog, in a post written by a named employee, documents the Auto-Clean risk.
This distribution is not an accident of content strategy. It is the natural outcome of a user model that ends at the expert. When the vendor's model of its user is someone who already knows the workaround, there is no reason to build the warning into the software. The problem is that this model is wrong for the majority of people running EBSD at shared facilities today.
Complexity Does Not Cause These Failures
The standard defence of scientific instrument software when confronted with this kind of analysis is that the domain is genuinely too complex for standard UX methods to address. EBSD involves crystallography, Kikuchi diffraction geometry, multiple coordinate frame conventions, and materials science domain knowledge. It is argued that the expertise barrier makes conventional interface design inapplicable.
This argument does not hold under examination.
The eight failure themes repeat across vendors, software generations, and user populations precisely because they are interface design failures. Domain complexity explains what the software must do: it does not explain why the interface presents Auto-Clean without a warning that the vendor has already written, or why the MTEX import wizard presents eight reference frame options with no guidance when the correct choice for each vendor's export format is entirely deterministic. These are not complexity problems. They are decisions about what information to surface to the user and when.
The most instructive comparison is MTEX 6.0 (as of October 2024). The how2plot convention system is a structural improvement that addresses coordinate system usability without changing a single algorithm. The science did not change. The interface changed. The failure rate for the most commonly reported problem in the field can be reduced by a design decision. That is a design problem, not a domain complexity problem.
A parallel observation comes from our own work. Across scientific instrument software projects, the finding that consistently surprises vendors is not that their software has bugs. It is that the workflows their engineers designed are not the workflows their users want to execute. In practice, users fight the software to reach the analysis they need. First-time users encounter contradictions without context, and the frustration this creates makes learning harder, not just slower. Younger researchers, who recognise immediately that the interface is encoding an assumption that does not match their task, simply switch to a less capable tool rather than adapt to the wrong model. Vendors find this difficult to accept because it implies that years of engineering decisions were built on a user model that was never validated.
What Vendors Should Do Differently
Four principles follow directly from the taxonomy. They are stated as design requirements, not suggestions.
Principle 1: Surface the Boundary
Every crossing between software tools involves a coordinate frame conversion. At every export point, the software knows which tool the data is going to (based on file format selection) and which frame the current data is in. The conversion is deterministic. Apply it automatically, and surface the transformation that was applied so the user has a record. For cases where the destination tool is unknown, present the eight options with explicit guidance on which to select for each downstream tool. Remove ambiguity from the single most commonly reported failure mode in the field.
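A sketch of what that looks like at an export boundary follows. The destination-to-correction mapping, the rotation values, and the export helper are illustrative placeholders rather than the real vendor conventions; the structural point is that the format choice determines the conversion, so the software can apply it and leave a record.

```matlab
% Principle 1 as code: apply the deterministic frame correction at
% export and surface what was done. All rotation values below are
% placeholders, not the real vendor conventions.
function exportWithFrameRecord(ebsd, fname, destination)
    switch destination
        case 'mtex'
            correction = rotation.byAxisAngle(xvector, 180*degree);  % placeholder
        case 'oim'
            correction = rotation.byAxisAngle(zvector, 90*degree);   % placeholder
        otherwise
            error('Unknown destination: state the target frame explicitly.');
    end

    % Rotate the orientations only; keep the spatial grid fixed.
    ebsd = rotate(ebsd, correction, 'keepXY');

    % Surface the transformation as a machine-readable sidecar record.
    fid = fopen([fname '.framelog'], 'w');
    fprintf(fid, 'destination=%s\ncorrection_angle_deg=%.1f\n', ...
        destination, correction.angle / degree);
    fclose(fid);

    export_ctf(ebsd, fname);   % assumption: MTEX CTF export helper
end
```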
Principle 2: Guard Destructive Operations
Auto-Clean is destructive. The warning already exists in vendor documentation. Move it into the software. Require confirmation that includes the specific risk: "Auto-Clean cannot be undone and will change your results. Oxford Instruments recommends understanding the consequences before applying it." Add the same structural protection to cleaning level selection by surfacing the scale direction and the rationale for the standard iterative protocol.
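In interface terms the guard is a few lines. A sketch using stock MATLAB dialog calls, in which autoClean and logCleaningStep are hypothetical stand-ins for the vendor's cleaning call and an audit hook, and the dialog text paraphrases the warning Oxford has already published:

```matlab
% A guarded destructive operation: the confirmation names the specific,
% vendor-acknowledged risk rather than a generic "Are you sure?".
% autoClean and logCleaningStep are hypothetical stand-ins.
choice = questdlg(['Auto-Clean cannot be undone and will change your ' ...
                   'results. Aggressive cleaning makes features look ' ...
                   'artificial and has led to paper rejection. Apply?'], ...
                  'Auto-Clean', 'Apply and log', 'Cancel', 'Cancel');

if strcmp(choice, 'Apply and log')
    cleaned = autoClean(ebsd);                        % hypothetical vendor call
    logCleaningStep('auto_clean', datetime('now'));   % hypothetical audit hook
end
```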
Principle 3: Embed the Audit Trail
Scientific instrument software vendors should embed the audit trail in the file format, not rely on the researcher to reconstruct it. Every cleaning step, threshold value, and coordinate frame choice should be written into the exported file metadata as a machine-readable record. This costs nothing algorithmically and changes nothing about the analysis logic. It addresses reproducibility failures and makes methods sections more complete as a side effect.
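For HDF5-based exports such as .h5oina, this is achievable with stock tooling today. A sketch using MATLAB's built-in HDF5 attribute call; the attribute names and group path are illustrative, and the group is assumed to already exist in the file.

```matlab
% Principle 3 as code: write every consequential choice into the file
% itself as machine-readable attributes. File, group path, and
% attribute names are illustrative; the group must already exist.
fname = 'map_cleaned.h5oina';
grp   = '/1/EBSD/Processing';

h5writeatt(fname, grp, 'cleaning_routine',      'wild spike removal, one neighbour pass');
h5writeatt(fname, grp, 'cleaning_level',        int32(2));
h5writeatt(fname, grp, 'gb_misorientation_deg', 10.0);
h5writeatt(fname, grp, 'gos_threshold_deg',     2.0);
h5writeatt(fname, grp, 'min_pixels_per_grain',  int32(10));
h5writeatt(fname, grp, 'euler_frame_flag',      'convertEuler2SpatialReferenceFrame');
h5writeatt(fname, grp, 'software_version',      'illustrative');
```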
What remains genuinely uncertain is whether vendors will adopt this in the absence of regulatory pressure or a competitive differentiator that makes the audit trail a feature rather than an overhead. There is no clear answer yet on which of those two conditions will arrive first.
Resolving the eight themes requires systematic UX transformation rather than isolated feature fixes, because the failure modes share structural causes: an incomplete user model, documentation that covers only success conditions, and no mechanism for surfacing consequential decisions at the moment they are made. A structured approach to UX transformation, beginning with a full audit of current failure modes against the actual user population, is what distinguishes substantive change from cosmetic interface refresh.
Principle 4: Document Failure Conditions
Official documentation currently describes how to use features when they work. It does not document when phases are too similar for reliable indexing, when TruPhase is inapplicable, when Hough indexing will fail, or when coordinate conversion is required. Every major feature in EBSD software has known conditions under which it produces incorrect results. Documenting those conditions is not a liability admission. Leaving them undocumented, while the community documents them instead, is a reputational position that has already been taken, and the question is only whether vendors will choose to recover it.
Limits and Gaps
This taxonomy was built from sources that are indexed and accessible: forum threads, GitHub issues, published papers, and facility SOPs. YouTube comment sections were not indexed and represent a documented gap. Direct facility interviews were not conducted for this research phase; the taxonomy reflects what users report publicly, which may underrepresent failures that researchers consider too embarrassing to document, or failures that are silently accepted as normal.
Several themes are more strongly evidenced than others. Coordinate system confusion (Theme 1) and the missing audit trail (Theme 4) have the broadest and most independent corroboration across the corpus. The simultaneous acquisition promise (Theme 8) is well-evidenced in the Warren Lab SOP and the AZtecSynergy product documentation, but the full scope of labs that have encountered this failure is unknown.
The taxonomy does not yet address semiconductor and fine-feature EBSD workflows, where the failure mode profile may differ from the metals and geology domains that dominate the evidence base. This is a genuine gap: three searches for semiconductor/solder/intermetallic phase identification with AZtec in published papers returned insufficient results for inclusion.
The conclusions of this article would not hold for a software environment in which the user population is entirely composed of expert crystallographers who run the software daily. Under those conditions, most of the community documentation workarounds are effective and the cost of the current design is absorbed as expert overhead. The problem is that this is not the actual user population at most facilities, and it is less true every year as EBSD instruments become more widely distributed.
We examine the most consequential of these failures, the case where software produces scientifically invalid output without any error state, in the next article in this series.
Conclusion
A researcher began this article at a microscope, facing pole figures that looked correct. They were wrong. Two days of forum investigation later, the answer was in a community wiki that exists because the vendor did not put the answer in the software.
The Eight-Theme Failure Taxonomy does not describe a technology that has reached the limits of what design can address. It describes eight design decisions, and absences of decisions, that have accumulated into a user experience that systematically fails the people who need it most: the facility user, the PhD student, the researcher working outside their primary expertise. These users are the widest distribution of scientific instrument software in the world. They are the least likely to find the LightForm Wiki.
Vendors who understand this are positioned to do something about it. The improvements that address the most critical failures require no changes to any algorithm: surface the coordinate frame conversion, move the Auto-Clean warning into the software, embed the audit trail in the file. The science does not need to change. The interface does.
Frequently Asked Questions
What is the most common UX failure in EBSD software? Coordinate system confusion is the single most-raised issue across all evidence sources, including ResearchGate, Reddit, mailing lists, and GitHub discussions. AZtec, MTEX, OIM Analysis, CrossCourt4, and DREAM.3D each use a different default reference frame. Crossings between tools without manual correction produce orientation maps that look correct but are wrong. The failure is silent and has appeared in published literature. It is documented in community resources, not in vendor software.
Why does Auto-Clean in AZtecCrystal produce invalid data? Auto-Clean applies all available cleaning operations without user control over parameters. Oxford Instruments acknowledges in a vendor blog post that aggressive Auto-Clean use makes microstructural features look artificial and that papers omitting cleaning details are dismissed by reviewers. AZtec 6.2 release notes (as of February 2025) mention no corresponding safeguard or warning. The risk is real, vendor-acknowledged, and not surfaced at the point of use.
What is the CTF round-trip failure in MTEX? MTEX-exported CTF files cannot be reimported into AZtecCrystal (MTEX GitHub Issues 478 and 479, as of March 2026). The error message reads "Microscope acquisition parameters not available." The implication is that the AZtec-to-MTEX workflow is strictly one-directional: once data has been exported and processed in MTEX, it cannot be returned to AZtecCrystal for further processing.
Why do AZtec and MTEX produce different pole figures for the same dataset? AZtec uses a beam/camera coordinate system for map display but calculates orientations in the acquisition coordinate frame. MTEX uses a different default reference frame. Without manual correction during MTEX import, the exported orientations will be in the wrong frame. For hexagonal materials, an additional 30° rotation discrepancy exists between Oxford and EDAX conventions that affects all titanium, zirconium, and hexagonal mineral analysis.
What does the Eight-Theme Failure Taxonomy cover? The Eight-Theme Failure Taxonomy for Scientific Instrument Software identifies eight reproducible categories of interface design failure across EBSD and EDS software: coordinate system confusion, destructive cleaning without in-software warnings, broken export/import chains, the missing audit trail, phase identification and mental model violations, expert-only paths for routine operations, update-driven workflow breaks, and the simultaneous acquisition promise versus pipeline reality.
How should vendors approach fixing these failures? Priority order follows evidence intensity. Coordinate system confusion and the missing audit trail are the most widely corroborated failures and require no algorithmic changes: surface the transformation at every export boundary, and write cleaning parameters and threshold values into file metadata. Auto-Clean guarding is the third priority: move the existing vendor warning into the software interface. Documentation of failure conditions follows: every major feature has known boundary conditions that are currently absent from official resources.
References
Cross, A.J., Hirth, J.P., & Tullis, J. (2017). Low-temperature plasticity of quartz: Bridging laboratory and crustal conditions through microstructural analysis. Journal of Geophysical Research: Solid Earth, 122(9), 7252-7280. https://doi.org/10.1002/2017JB014166
Davis, A.E., Brough, I., Donoghue, J., Hadley, M., & Robson, J.D. (2019). Deformation and texture of Ti-6Al-4V at cryogenic temperature. Materials Science and Engineering: A, 765, 138247. https://doi.org/10.1016/j.msea.2019.138247
LightForm Group, University of Manchester. (2025). EBSD analysis: Standard operating procedures and workflow documentation. https://lightform-group.github.io/wiki/
MTEX Development Team. (2024). MTEX 6.0 release notes: Multiple spatial reference systems and how2plot convention. https://github.com/mtex-toolbox/mtex/releases/tag/mtex-6.0
MTEX GitHub Issue Tracker. (2024). Issues 478 and 479: CTF round-trip failure on AZtecCrystal reimport. https://github.com/mtex-toolbox/mtex/issues/478
Muiruri, A.M., Maringa, M., & du Preez, W.B. (2022). Microstructural study of rapidly solidified and heat-treated Ti-6Al-4V alloy produced by laser powder bed fusion. Applied Sciences, 12(19), 9552. https://doi.org/10.3390/app12199552
Niessen, F., Nyyssönen, T., Gazder, A.A., & Hielscher, R. (2022). Parent grain reconstruction from partially or fully transformed microstructures in MTEX. Journal of Applied Crystallography, 55(1), 180-194. https://doi.org/10.1107/S1600576721011560
Oxford Instruments. (2018). EBSD data processing: Why cleaning matters. Oxford Instruments EBSD Blog. https://www.ebsd.com/ebsd-explained/data-processing
In this story
Scientific instrument software fails its users not because the domain is too complex for interface design, but because eight structural failure themes repeat across vendors, software generations, and user populations. We present a research-based taxonomy built from peer-reviewed literature, institutional SOPs, GitHub issue trackers, and forum analysis, with direct implications for product teams.