
5.1 User Interface (GUI)

The GUI is the primary training interface of BoxBunny. It turns user intent into session choices, workout control, progress tracking, and feedback that boxers can act on during training. Users interact with the interface via the onboard touchscreen, and the system also supports physical button navigation through the punching contact points and IMU sensors, allowing hands-free control during active training. Every design decision was filtered through a single principle: the interface must never become the obstacle. If a user pauses mid-session to figure out how to navigate, the training flow breaks.

The GUI therefore sits at the product level as well as the technical level. The pages below show how boxer needs, training goals, and practical gym constraints shaped the interface, then how those choices were built, integrated, and tested.

Figure: Product story flow. The GUI is shaped by training needs first, then translated into interface behavior: boxer needs (start quickly, stay oriented, review progress) drive GUI requirements (touch, buttons, isolation, live metrics), which shape the training session (drill setup, active round, rest, review) and feed product feedback (scores, coaching, history, next-session decisions).

Requirements and Considerations

This subsystem addresses DO-1 (Performance Analytics), DO-3 (Skill Progression Studio), and DO-4 (Adaptive Fight Intelligence). See Section 5 for the full Design Objectives reference.

The GUI requirements were derived from the Function Analysis (Table 6, Section 3.1), the user journey defined in Section 4.2, and the practical constraint that the GUI subsystem was developed in parallel with all hardware subsystems.

ID | Requirement | Source
GUI-1 | Touchscreen and physical-button operable: all interactive elements ≥60 px, minimal text input, IMU-based button navigation | User journey (Section 4.2); Function Spec display latency; Appendix 6 mapping
GUI-2 | Multi-user accounts with complete data isolation between users | Product need: Performance Analytics (Appendix 6)
GUI-3 | Structured training progression through a 50-combo curriculum with mastery-based advancement | Product need: Skill Progression (Appendix 6)
GUI-4 | Real-time session data display (combo prompts, round timers, performance metrics) | Function Spec: on-screen dashboard, brightness requirements; Appendix 6 mapping
GUI-5 | Hardware-independent development via an abstracted integration layer with mock interfaces | Parallel development constraint; Appendix 6 mapping
Table 5.1-1: GUI Requirements and Sources

Touch target minimums for finger-operated interfaces are established in platform human interface guidelines (Apple Inc., 2023; Google LLC, 2023).

System Design Narrative

Following the Systems Engineering V-Model in Section 3.2, GUI-1 to GUI-5 were fixed before detailed screen design (INCOSE, 2023). These requirements trace back to product needs mapped in Appendix 6. The diagram below applies the same V-Model structure used in Robot Mechanism, but for GUI engineering decisions.

In product terms, the GUI had to feel like training support rather than a control panel. Boxers needed quick drill access, clear round state, simple navigation under fatigue, and a path from session start to progress review without unnecessary friction.

Figure: GUI Systems Engineering V-Model showing requirement decomposition on the left (GUI problem framing with GUI-1 to GUI-5 fixed; flow architecture with navigation and page contracts; feature decomposition across Training, Performance, Sparring, Others, and Coach Station), the integration build at the base (unified session workflow with stable event contracts), and verification closure on the right (module verification with flow, state-transition, and account-isolation checks; system interaction tests for GUI event-response consistency; user acceptance via gloved-use and session trials) for GUI-1 to GUI-5.

Left Side of the V: Design Decomposition

Decomposition was performed by interaction outcome, not by isolated page. GUI-1 drove operability via padded controls and hands-free navigation. GUI-2 and GUI-3 drove account-scoped progression data. GUI-4 drove live round-state presentation. GUI-5 drove stable integration contracts so interface behavior could be validated while connected subsystems were still maturing.
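The mock-interface approach behind GUI-5 can be illustrated with a minimal sketch. The interface and event names below are hypothetical, not the project's actual contracts: the idea is simply that the GUI codes against an abstract backend contract, and a mock replays scripted session events so interface flows can be exercised before the hardware subsystems exist.

```python
from abc import ABC, abstractmethod
from typing import Callable

# Hypothetical contract sketch: the GUI depends only on this interface,
# so a mock can stand in for the real robot-intelligence backend (GUI-5).
class SessionBackend(ABC):
    @abstractmethod
    def start_round(self, work_s: int) -> None: ...

    @abstractmethod
    def on_event(self, handler: Callable[[str, dict], None]) -> None: ...

class MockBackend(SessionBackend):
    """Replays scripted events so GUI flows can be tested without hardware."""
    def __init__(self, script):
        self._script = script            # list of (event_name, payload) tuples
        self._handler = lambda *_: None

    def on_event(self, handler):
        self._handler = handler

    def start_round(self, work_s):
        # The real backend would stream events over ROS 2; the mock replays them.
        for name, payload in self._script:
            self._handler(name, payload)

events = []
mock = MockBackend([("punch_detected", {"pad": 3}), ("round_end", {"round": 1})])
mock.on_event(lambda name, payload: events.append(name))
mock.start_round(work_s=180)
# events now holds ["punch_detected", "round_end"]
```

Because the GUI only sees `SessionBackend`, swapping the mock for the real integration layer requires no change to interface code.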

Base of the V: Integration Build

At integration stage, the modules were assembled into one operator workflow: login, mode selection, session execution, and review. Navigation, timer behavior, and results patterns were standardized across modes so interaction remained predictable under training pace.

Right Side of the V: Verification Closure

Verification was planned from the same requirements used during decomposition. Functional checks covered account isolation, progression unlock rules, session-state transitions, and navigation stack correctness. User trials validated gloved operability and transition speed during active training. Together, these checks close the requirement-to-verification loop for GUI-1 to GUI-5 (INCOSE, 2023).

V-Model Stage | GUI Focus | Applied to BoxBunny GUI | Verification Evidence
Concept and Requirements | Define user-operable interface behaviour | GUI-1 to GUI-5 derived from user journey and function specifications | Requirement table and traceability mapping
High-Level Design | Partition features into coherent modules | Training, Performance, Sparring, Others, and Coach Station flows with shared navigation patterns | Screen-flow checks and navigation path tests
Detailed Design and Build | Assemble interface logic with stable subsystem contracts | Session setup, round HUD, results pages, and account-scoped history | Mode-by-mode functional testing
Verification and Validation | Confirm performance against GUI requirements | Multi-user isolation checks, progression rule checks, and active-session usability checks | Validation summary and testing records

Table 5.1-2: GUI application of the V-Model from decomposition to verification

Training Modes

The GUI organises training into four modes, each addressing a different objective identified during needs finding. A fifth tab covers system configuration. Screenshots of each mode's selection interface are available in Appendix 3.

The five tabs are Training, Performance, Sparring, Others, and Coach Station.

Training: Combo Curriculum. 50 punch combinations organised across Beginner (15), Intermediate (20), and Advanced (15) tiers. Each combo must be completed across five sessions with an average score of 3.0 or above before it is marked as mastered and the next combo unlocks. Users can also build custom sequences through the Self-Select mode. A configurable session setup lets users choose round count, work time, rest time, and playback speed before starting. Post-session, an AI chat page offers feedback using a local language model, falling back to hardcoded responses if the model is unavailable.
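The mastery-based advancement rule lends itself to a small sketch. Assuming the check applies to the five most recent completed sessions (the section does not specify which five), a minimal version might look like:

```python
MASTERY_SESSIONS = 5      # sessions required per combo (from the curriculum spec)
MASTERY_AVG_SCORE = 3.0   # minimum average score across those sessions

def is_mastered(session_scores: list[float]) -> bool:
    """Mastery rule: at least five completed sessions with an average
    score of 3.0 or above marks the combo mastered and unlocks the next.
    Assumption: the average is taken over the most recent five sessions."""
    if len(session_scores) < MASTERY_SESSIONS:
        return False
    recent = session_scores[-MASTERY_SESSIONS:]
    return sum(recent) / len(recent) >= MASTERY_AVG_SCORE

print(is_mastered([3.2, 2.8, 3.5, 3.0, 3.1]))  # True  (average 3.12)
print(is_mastered([2.0, 2.5, 3.0, 3.0, 3.0]))  # False (average 2.70)
```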

Performance: Power, Stamina, and Reaction Time tests. The Power test reads accelerometer data from the onboard IMU sensor to measure peak and average punch force. The Stamina test runs a two-minute session and tracks total punches, punch rate, and fatigue percentage. The Reaction test uses the computer vision subsystem to measure response time. All results are stored per-user in SQLite and viewable through a tabbed history page.
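Per-user result storage can be sketched with Python's built-in sqlite3 module. The table and column names below are illustrative, not the project's actual schema; the point is that every query is scoped by user ID, which is what enforces the isolation required by GUI-2.

```python
import sqlite3

# Illustrative schema sketch; the real project's tables are not specified here.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE test_results (
    user_id     INTEGER NOT NULL,
    test_type   TEXT NOT NULL,          -- 'power' | 'stamina' | 'reaction'
    value       REAL NOT NULL,
    recorded_at TEXT DEFAULT CURRENT_TIMESTAMP
)""")

def save_result(user_id, test_type, value):
    conn.execute(
        "INSERT INTO test_results (user_id, test_type, value) VALUES (?, ?, ?)",
        (user_id, test_type, value))

def history(user_id, test_type):
    # Scoping every read by user_id is what keeps accounts isolated (GUI-2).
    rows = conn.execute(
        "SELECT value FROM test_results WHERE user_id = ? AND test_type = ?",
        (user_id, test_type)).fetchall()
    return [v for (v,) in rows]

save_result(1, "power", 812.5)
save_result(2, "power", 640.0)
print(history(1, "power"))  # [812.5] — user 2's result is not visible
```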

Sparring: Markov chain combo generation. Sparring mode generates punch sequences procedurally, weighted by the user's weakness profile so that the robot targets areas where the user needs the most practice. Five AI opponent styles are available: Boxer (balanced, technical), Brawler (aggressive, power-heavy), Counter-Puncher (reactive, waits for the user to commit), Pressure (relentless, minimal rest), and Switch (cycles between styles unpredictably). This mode is available to all proficiency levels and is designed for users who want less predictable, more reactive training compared to the structured combo curriculum.
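The weakness-weighted Markov generation can be sketched as follows. The punch states, transition table, and weighting scheme are illustrative; the actual model's states and weights are not specified in this section.

```python
import random

PUNCHES = ["jab", "cross", "hook", "uppercut"]

# Illustrative first-order transition probabilities; the real model differs.
TRANSITIONS = {
    "jab":      {"jab": 0.2, "cross": 0.5, "hook": 0.2, "uppercut": 0.1},
    "cross":    {"jab": 0.3, "cross": 0.1, "hook": 0.4, "uppercut": 0.2},
    "hook":     {"jab": 0.4, "cross": 0.3, "hook": 0.1, "uppercut": 0.2},
    "uppercut": {"jab": 0.5, "cross": 0.3, "hook": 0.1, "uppercut": 0.1},
}

def generate_combo(length, weakness, rng=random):
    """Walk the Markov chain, biasing transition weights toward punches
    the user scores poorly on (weakness maps punch -> multiplier >= 1)."""
    combo = [rng.choice(PUNCHES)]
    for _ in range(length - 1):
        probs = TRANSITIONS[combo[-1]]
        weighted = {p: w * weakness.get(p, 1.0) for p, w in probs.items()}
        total = sum(weighted.values())
        combo.append(rng.choices(list(weighted),
                                 weights=[w / total for w in weighted.values()])[0])
    return combo

# A user weak on uppercuts sees them roughly three times as often.
combo = generate_combo(4, weakness={"uppercut": 3.0})
print(combo)  # e.g. ['jab', 'cross', 'uppercut', 'jab']
```

An opponent style such as Pressure could be expressed as a different transition table plus shorter rest intervals, keeping the same generation loop.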

Free Training. An open, reactive session with no structured drills. The robot responds to user pad hits with contextually appropriate counter-strikes, creating a dynamic back-and-forth. After 5 seconds of inactivity, the robot returns to its guard position. This mode is designed for users who want unstructured practice without scoring or progression.

Others: Settings and configuration. The Others page provides access to system settings including the AI chat toggle, computer vision toggle, and performance test entry points. It also houses the user management page where accounts can be viewed and deleted.

The Others page also provides access to a pattern lock setup (3x3 grid authentication designed for reliable padding-based interaction) and a QR code for connecting a phone to the robot's WiFi-hosted companion dashboard.

Coach Station: Group circuit training. Coach Station allows a coach to manage multiple participants rotating through the boxing station. The coach sets a participant count (up to 30), optionally selects a training preset, and starts the session. Each participant begins their turn by hitting the centre pad or tapping the on-screen GO button. The system auto-advances to the next participant after the configured work timer expires. Session results are stored per-participant for group review. The coach dashboard layout is shown in Appendix 3.
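The rotation logic can be sketched independently of the timer machinery. In the real GUI a Qt timer would trigger the advance on work-time expiry; the class and method names here are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class CoachSession:
    """Sketch of Coach Station rotation: participants cycle through the
    station, and results are stored per-participant for group review."""
    participant_count: int            # up to 30 per the Coach Station spec
    current: int = 0                  # index of the participant now training
    results: dict = field(default_factory=dict)

    def record(self, score):
        self.results.setdefault(self.current, []).append(score)

    def advance(self):
        """Called when the work timer expires or the next boxer hits GO."""
        self.current = (self.current + 1) % self.participant_count
        return self.current

session = CoachSession(participant_count=3)
session.record(4.2)               # participant 0 finishes their round
print(session.advance())          # 1 — next participant is up
```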

Design Summary

The GUI was built in PySide6 (Qt for Python) targeting an NVIDIA Jetson Orin NX running Ubuntu 20.04 LTS with ROS 2 Humble. The original display was a 7-inch touchscreen at 1024x600, later upgraded to a 10.1-inch capacitive display at 1280x800 following user testing observations. The application uses a five-layer architecture separating presentation, application logic, business logic, integration, and data concerns. The integration layer exchanges standard session events and control commands with the robot intelligence subsystem, enabling real-time updates to drill progress, timer state, and coaching prompts. A phone-accessible companion dashboard (a Vue 3 web app) shares the same user database and is served over the robot's WiFi access point, allowing users to view analytics, start training remotely, and chat with the AI coach from their phone. Backend implementation details are documented in Section 5.3. The system was built over six iterations from December 2025 to April 2026.

Interface Walkthrough

The video below demonstrates the complete user flow: account creation, proficiency assessment, technique drills with session configuration, sparring mode, free training, performance tests, session history, and settings.

Video: Complete GUI navigation walkthrough (2 min)

Validation Summary

All five GUI requirements were verified through structured functional testing after the sixth iteration. Multi-user data isolation was confirmed across five test accounts with zero observed cross-contamination. The combo curriculum progression algorithm was validated through multiple complete training cycles. The navigation stack was tested across arbitrary navigation depths and confirmed correct reverse-order traversal. Full test results and user testing observations are documented in the Testing and Evaluation sub-page.
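The reverse-order traversal property that was verified can be illustrated with a minimal navigation-stack sketch; the page names are hypothetical.

```python
class NavigationStack:
    """Sketch of the navigation behaviour verified in testing: pages push
    on forward navigation and pop in reverse order on Back."""
    def __init__(self, root="home"):
        self._stack = [root]

    def push(self, page):
        self._stack.append(page)

    def back(self):
        if len(self._stack) > 1:       # never pop the root page
            self._stack.pop()
        return self._stack[-1]

nav = NavigationStack()
for page in ["training", "combo_setup", "round_hud"]:
    nav.push(page)
print(nav.back())  # combo_setup
print(nav.back())  # training
```

Correct reverse-order traversal at arbitrary depth follows from the stack invariant: Back always returns the page pushed immediately before the one being left.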

Integration with Robot Intelligence

The GUI is the user-facing layer of a larger system. The robot intelligence subsystem (Section 5.3) handles sensing, scoring, and AI inference. The GUI presents the outputs of that processing to the user and sends user intent back into the system. Neither side depends on the internal details of the other.

From the user's perspective, this integration is what makes the interface feel live. When a punch lands on a pad, the combo display advances automatically. When a round ends, the rest screen appears without any user action. When the AI coach generates a tip, it slides in at the top of the screen. All of this happens because the GUI listens for events from the backend and reacts to them in real time.
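This event-driven reaction pattern can be sketched as a dispatch table. In the actual PySide6 implementation these reactions would be Qt slots connected to backend signals; the event and payload names here are assumptions.

```python
ui_log = []  # stands in for actual screen updates in this sketch

# Hypothetical event-to-reaction mapping mirroring the examples above.
HANDLERS = {
    "punch_detected": lambda p: ui_log.append(f"advance combo (pad {p['pad']})"),
    "round_end":      lambda p: ui_log.append("show rest screen"),
    "coach_tip":      lambda p: ui_log.append(f"slide in tip: {p['text']}"),
}

def dispatch(event, payload):
    handler = HANDLERS.get(event)
    if handler:                        # unknown events are ignored safely
        handler(payload)

dispatch("punch_detected", {"pad": 2})
dispatch("round_end", {})
print(ui_log)  # ['advance combo (pad 2)', 'show rest screen']
```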

The flow works in two directions:

On the User Interface side (5.1), the user taps Start, hits pads during a session, and opens results or coach feedback; the GUI updates the screen in response. On the Robot Intelligence side (5.3), the backend detects punches and session state, returns round updates and results, delivers coaching tips and feedback, and keeps the training flow synchronized. User intent flows to the backend; responses flow back to the GUI.

Figure: Bidirectional relationship between the GUI and Robot Intelligence

Engineering and Design Readiness

This GUI implementation demonstrates a complete engineering approach: translating product needs into requirements, decomposing architecture, implementing a multi-mode production interface, integrating with robot intelligence through stable contracts, and closing the loop with verification evidence. The subsystem is not only functional, but auditable in terms of design decisions, iteration history, and measured outcomes.

Engineering — Layered architecture with integration boundaries: GUI behavior remained stable while hardware and the backend matured in parallel.
Design — User-centred flow under real training constraints: touch targets, IMU navigation, and readability revisions were grounded in user observation.
Innovation — Product-level integration across GUI, CV, and coaching: the subsystem supports real-time training, progression, and AI-assisted feedback in one workflow.
