Toshiki Matsukuma
Software Engineer
Profile
Design to prevent rework. Code to ship. Full-stack engineer with 7+ years in web development. Former frontend tech lead of a 10-person team. Raised in Tokyo, based in Bangkok. Vegetarian. I take lunchtime walks in the park for some sunshine. Love travel and beaches. Always learning something new — math, accounting, law, philosophy, you name it.
Skills
Frontend
Backend
Database
DevOps
Testing
Work Experience (11 projects)
Backend Engineer (Microservice Design & Implementation)
5 members · AI Translation SaaS Startup · 2025/04 — 2025/09 · Backend / Infra / Testing
Post-translation processing microservice suite built with FastAPI + Celery + PostgreSQL + Redis. Designed and implemented the post-validation service, modernized the frontend development environment, built Docker/GHCR deployment infrastructure, implemented OpenAPI mock auto-generation, and established E2E testing — comprehensively improving the development platform
13 tasks
State Machine Design for Post-Translation Processing and Fault-Tolerant Task Infrastructure [Extreme]
Managed post-translation quality checks and re-translation processes using a 9-state state machine, recording all step results as immutable data in the database. This enabled root cause analysis of translation accuracy issues using SQL alone, with processing status queryable via API in real time. Each step was implemented as an idempotent Celery task, with fault-tolerant design that can restore queue information from the database and resume processing after container failures
Contributions
- Designed and implemented a loosely coupled, maintainable architecture with complete separation of state transition logic and business logic
Decisions (3)
State machine control for translation validation process
Designed a 9-state state machine to control the translation check to re-translation loop. By separating state transition logic from business logic, achieved a loosely coupled structure where changes to branching conditions do not affect other steps
Immutable schema design prioritizing observability
Adopted an approach of recording all step results as immutable data in the database. When translation accuracy issues occur, root causes can be analyzed via SQL, and the data can be directly leveraged for future AI model improvements. Processing status can be determined with a simple DB SELECT, reducing the burden on both developers for operational verification and business stakeholders for translation quality checks
Fault-tolerant design with idempotent Celery tasks
Implemented each state transition step as an idempotent Celery task. With exponential backoff + jitter retry configuration, API polling portions can also be safely retried. Even if a container crashes and the Redis queue is lost, processing can be resumed by restoring queue information from the database state
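The idempotency and retry pattern described above can be sketched as follows. This is a minimal illustration, not the production code: `completed_steps` is a hypothetical in-memory stand-in for the database table of immutable step records, and all names are invented for the example.

```python
import random

# Hypothetical stand-in for the DB table of immutable step results.
completed_steps: dict[tuple[str, str], dict] = {}

def run_step(job_id: str, step: str, work) -> dict:
    """Idempotent step: if a result was already recorded, return it instead
    of re-executing, so a retried or replayed task does no duplicate work."""
    key = (job_id, step)
    if key in completed_steps:
        return completed_steps[key]
    result = work()
    completed_steps[key] = result  # written once, never updated
    return result

def backoff_with_jitter(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Exponential backoff with full jitter, suitable for API-polling retries."""
    return random.uniform(0, min(cap, base * 2 ** attempt))
```

Because every step consults the immutable record before doing work, replaying the whole chain after a container failure converges to the same end state.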
Outcomes (1)
Before: Post-translation processing status was a black box, with log inspection being the only way to verify operations
After: Processing status became queryable via a single API. Translation accuracy issues can now be root-cause analyzed with SQL, reducing verification overhead for both developers and business stakeholders
Improved observability and fault recovery capability
Challenges (2)
Designing processing recovery after container failures
Since Redis queues are volatile, built a mechanism to restore correct queue information from database state upon container restart. By designing each task to be idempotent, processing can safely resume from where it left off
Separating state transition logic from business logic
Managed state transition branching conditions and business logic within each step as completely separate modules. Achieved a loosely coupled design where modifications to either side do not affect the other, ensuring maintainability
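The separation above can be illustrated with a transition table that step implementations never touch. The states and events below are invented for the sketch (the actual machine had 9 states); only the structural idea is from the source.

```python
from enum import Enum, auto

class State(Enum):
    TRANSLATED = auto()
    CHECKING = auto()
    NEEDS_RETRANSLATION = auto()
    DONE = auto()

# All branching conditions live in one table; business-logic steps are
# separate modules that never reference it.
TRANSITIONS = {
    (State.TRANSLATED, "start_check"): State.CHECKING,
    (State.CHECKING, "check_failed"): State.NEEDS_RETRANSLATION,
    (State.CHECKING, "check_passed"): State.DONE,
    (State.NEEDS_RETRANSLATION, "retranslated"): State.TRANSLATED,
}

def next_state(current: State, event: str) -> State:
    try:
        return TRANSITIONS[(current, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {current.name} + {event}")
```

Changing a branching condition means editing one table entry; no step module is affected.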
Manual Test Specifications for Regression Prevention and an AI-Assisted Testing Strategy [Medium]
Created manual test specifications for regression prevention ahead of post-initial-release refactoring. Candidly assessed the reliability limits imposed by the non-deterministic behavior of generative AI (agent-browser), and formulated a phased migration strategy from manual testing to E2E to component tests. Made the practical decision to limit sections tightly coupled to the OnlyOffice editor (a Canvas implementation) to manual testing only
Contributions
- Evaluated the reliability limits of generative AI's non-deterministic behavior and formulated a phased test automation strategy (manual testing to E2E to component tests)
Decisions (2)
Testing strategy informed by generative AI (agent-browser) non-deterministic behavior
Rather than having AI perform unstructured manual testing, formulated a phased test automation approach: (1) systematize manual testing with test specifications first, (2) migrate to deterministic E2E and component tests, (3) leverage agent-browser solely for generating deterministic E2E test code rather than manual testing
Testing strategy for OnlyOffice editor (Canvas implementation) tightly-coupled UI
Determined that faking the OnlyOffice editor would reduce test effectiveness. Since E2E tests that rely on relative positioning within the Canvas DOM tend to be unstable, deliberately limited OnlyOffice tightly-coupled sections to manual testing only
Outcomes (1)
Before: No regression prevention measures existed during refactoring, and no testing strategy had been formulated
After: Created manual test specifications to systematize manual testing, candidly evaluated generative AI's reliability limits, and formulated a phased migration strategy from manual testing to E2E to component tests. Documented the practical decision to limit OnlyOffice tightly-coupled sections to manual testing
Systematization of testing strategy and quality assurance framework
Challenges (1)
Determining the limits of test automation for UI tightly coupled with Canvas-based OnlyOffice editor
Clearly separated test targets into 'automatable areas' and 'areas requiring manual testing.' OnlyOffice tightly-coupled sections are covered by manual test specifications, while other UI and API logic are automated with E2E and component tests
Glossary Design and Clean Architecture for Translation Validation [High]
Designed the domain model, glossary data structure, and clean architecture (UseCase/Repository/Domain separation) for the entire translation validation feature. Ensured process observability through schema design that persists generative AI decision-making to the database
Contributions
- Designed the entire translation validation logic using clean architecture with UseCase/Repository/Domain separation, facilitating task delegation among team members
Decisions (2)
Adoption of 3-layer UseCase/Repository/Domain architecture
Separated translation validation logic into three layers: UseCase (business flow control), Repository (data access abstraction), and Domain (domain model/validation). By confining generative AI calls to the UseCase layer, limited the impact scope when changing AI models
Schema design for persisting generative AI decisions to database
Adopted a design that persists all decisions made by generative AI at each validation step (translation quality scores, re-translation necessity, terminology correction suggestions) as database records. Established a data accumulation foundation for future AI model accuracy comparisons and prompt improvements
Outcomes (1)
Before: Translation validation logic had no design, with no criteria for task delegation within the team
After: The 3-layer architecture clarified responsibilities of each layer. Established a structure enabling team members to develop Repository and UseCase layers in parallel, with design documentation serving as the basis for task delegation
Established design foundation enabling parallel development across a 4-person team
Challenges (1)
Incorporating generative AI non-deterministic output into the domain model
Defined AI outputs as 'judgment results' with type definitions, designed a flow to validate in the Domain layer before persisting to the database. Structured so that changes in AI output format can be absorbed by Domain layer validation
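The "judgment result" pattern above can be sketched as a typed value object validated in the Domain layer before persistence. Field names and the score range are assumptions for illustration, not the actual schema.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class TranslationJudgment:
    """AI output captured as a typed judgment result (field names illustrative)."""
    quality_score: float              # assumed to be in [0.0, 1.0]
    needs_retranslation: bool
    term_suggestions: list[str] = field(default_factory=list)

def parse_judgment(raw: dict) -> TranslationJudgment:
    """Domain-layer validation: reject malformed AI output before it reaches the DB."""
    score = float(raw["quality_score"])
    if not 0.0 <= score <= 1.0:
        raise ValueError(f"quality_score out of range: {score}")
    return TranslationJudgment(
        quality_score=score,
        needs_retranslation=bool(raw["needs_retranslation"]),
        term_suggestions=[str(s) for s in raw.get("term_suggestions", [])],
    )
```

Because the parser is the single entry point, a change in the AI's output format is absorbed here rather than leaking into UseCase or Repository code.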
Near-Exact Match Algorithm for Glossary Usage Example Search [High]
Implemented an algorithm for glossary usage example search that tolerates notation variations, particle differences, and punctuation discrepancies while returning semantically accurate matches. Solved the problem where full-text search lacked precision and exact match produced excessive misses
Contributions
- Designed a 'near-exact match' search logic positioned between full-text search and exact match, achieving high-precision terminology search while tolerating notation variations
Decisions (1)
Design of a 'near-exact match' approach — neither full-text search nor exact match
PostgreSQL full-text search (tsvector) produced excessive hits due to Japanese particle and punctuation differences, while exact match caused frequent search misses due to notation variations. Designed an intermediate approach that applies normalization (punctuation removal, whitespace standardization, particle pattern tolerance) before string comparison, balancing precision and recall
Outcomes (1)
Before: Full-text search returned results unrelated to translation terms, while exact match could not find target usage examples due to notation variations
After: The near-exact match algorithm significantly improved glossary usability by returning only semantically accurate usage examples while tolerating notation variations, particle differences, and punctuation discrepancies
Improved glossary search accuracy (simultaneous reduction of false positives and improvement of recall)
Challenges (1)
Systematizing Japanese text notation variation patterns
Collected and classified frequently occurring notation variation patterns from translation target documents (mixed punctuation styles, particle interchanges, mixed full-width/half-width characters). Implemented as normalization rules and comprehensively verified with test cases
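The normalization step can be sketched as below. The rules shown (NFKC width folding, punctuation and whitespace removal) are an illustrative subset; the particle-pattern tolerance mentioned above is omitted for brevity.

```python
import re
import unicodedata

def normalize(text: str) -> str:
    """Fold notation variants before comparison: NFKC unifies full-width and
    half-width characters, then punctuation and whitespace are stripped."""
    text = unicodedata.normalize("NFKC", text)
    text = re.sub(r"[、。,.\s]+", "", text)  # drop punctuation and whitespace
    return text

def near_exact_match(a: str, b: str) -> bool:
    """Stricter than full-text search, looser than exact match."""
    return normalize(a) == normalize(b)
```

Strings that differ only in notation now compare equal, while semantically different strings still miss.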
Parallelizing Serial Network I/O in Celery Tasks with asyncio [High]
Changed serial network I/O calls to multiple external services (translation API, glossary API, etc.) to concurrent execution using an asyncio event loop. Established a pattern for safely integrating asyncio with Celery's synchronous worker model, improving latency and throughput
Contributions
- Established a pattern for safely launching an asyncio event loop within Celery synchronous workers, parallelizing previously serial external API calls
Decisions (1)
Adopting asyncio event loop integration pattern within Celery synchronous workers
Adopted a pattern of launching an event loop via asyncio.run() within tasks while maintaining Celery's synchronous worker (prefork) model. The alternative of converting Celery itself to async workers was rejected due to high ecosystem compatibility risks, and ThreadPoolExecutor was rejected because threads would be wastefully occupied during I/O waits
Outcomes (1)
Before: Calls to translation API and glossary API were executed serially, with each request waiting sequentially for 3 external APIs, resulting in long per-request processing times
After: Changed external API calls to concurrent execution with asyncio.gather. Reduced processing time from the sum of all API response times (serial execution) to the slowest API's response time
Reduced latency and improved throughput for external API call portions
Challenges (1)
Coexistence of Celery's synchronous execution model with asyncio
Since Celery's prefork workers are process-based, adopted an approach of creating and disposing of a new asyncio event loop within each task. By limiting the event loop lifecycle to the task scope, eliminated interference between workers
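A minimal sketch of the task-scoped event loop pattern. In production the synchronous function body would sit inside a Celery task; here `asyncio.sleep` stands in for the real HTTP calls, and the API names are invented.

```python
import asyncio

async def call_api(name: str, delay: float) -> str:
    # Stand-in for an external HTTP call (translation API, glossary API, ...)
    await asyncio.sleep(delay)
    return f"{name}: ok"

def process_request() -> list[str]:
    """Synchronous entry point, as a Celery prefork task would be.
    asyncio.run() creates a fresh event loop for this call and disposes of it
    on return, so loops never leak across tasks or workers."""
    async def gather_all():
        return await asyncio.gather(
            call_api("translate", 0.01),
            call_api("glossary", 0.02),
            call_api("validate", 0.015),
        )
    return asyncio.run(gather_all())
```

Total wall time is bounded by the slowest call rather than the sum of all three, which is the latency win described above.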
Optimizing Text Matching Algorithm for Content Control Assignment [Extreme]
Optimized the matching algorithm between source text and document structure to accurately assign markers to translation target locations within documents. Achieved both matching accuracy and performance for large-scale documents
Contributions
- Optimized the search algorithm to maintain accuracy while achieving practical processing speed for large-scale documents
Decisions (1)
Reducing search space through a staged matching strategy
Adopted a strategy of sequentially executing matching between source text and document structure (paragraphs, cells, list items) in three stages: exact match, normalized match, and partial match. By excluding locations confirmed in earlier stages from the search space, reduced computational complexity while maintaining accuracy
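The staged strategy can be sketched as below. The stages and the normalization function are illustrative simplifications; the point is that targets claimed in an earlier stage leave the candidate pool, shrinking the search space for later, more expensive stages.

```python
def staged_match(sources: list[str], targets: list[str]) -> dict[str, str]:
    """Three-stage matching: exact, then normalized, then partial.
    Matched targets are removed from the pool between stages."""
    def norm(s: str) -> str:
        return "".join(s.split()).lower()

    remaining = list(targets)
    matches: dict[str, str] = {}

    stages = [
        lambda s, t: s == t,              # stage 1: exact match
        lambda s, t: norm(s) == norm(t),  # stage 2: normalized match
        lambda s, t: norm(s) in norm(t),  # stage 3: partial match
    ]
    for stage in stages:
        for src in sources:
            if src in matches:
                continue
            for tgt in remaining:
                if stage(src, tgt):
                    matches[src] = tgt
                    remaining.remove(tgt)
                    break
    return matches
```

Because most locations resolve at the cheap exact stage, the expensive partial stage only ever scans a small residue.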
Outcomes (1)
Before: Matching processing was slow for large documents (100+ pages), with accuracy issues in content control assignment
After: The staged matching strategy achieved practical processing speed even for large documents. Matching accuracy also improved, enhancing the reliability of marker assignment to translation target locations
Improved matching speed and accuracy for large-scale documents
Challenges (1)
Trade-off between document structure segmentation granularity and matching accuracy
Adjusted text segmentation granularity per Word document internal structure type (paragraphs, table cells, list items, headers/footers). Resolved the issue where too-fine segmentation increases matching candidates and slows processing while too-coarse segmentation reduces partial match accuracy, by implementing element-type-specific segmentation rules
Modernizing the Frontend Development Environment [High]
Introduced Vite, Vitest, Storybook, Biome, and Playwright to the existing frontend in a single initiative, overhauling the development experience and code quality foundation. Improved build speed and established a toolchain for unit testing, UI catalog, linter/formatter, and E2E testing
Contributions
- Introduced 5 tools (Vite/Vitest/Storybook/Biome/Playwright) and built the testing, quality management, and UI catalog infrastructure from scratch
Decisions (2)
Selecting Vite + Biome (migrating from webpack + ESLint/Prettier)
Migrated the existing webpack-based build environment to Vite and consolidated ESLint+Prettier into Biome. Significantly improved development iteration speed through Vite's hot reload performance and Biome's fast lint/format capabilities
Unifying Node.js version management with Volta
Resolved the issue of sporadic build errors caused by Node.js version mismatches across the team by introducing Volta. Pinned the version at the project root to eliminate environment discrepancies between members
Outcomes (1)
Before: No unit tests, UI catalog, linter, or E2E tests existed — there were no objective means to verify code quality
After: Comprehensively introduced 5 integrated tools: Vite (build), Vitest (unit testing), Storybook (UI catalog), Biome (lint/format), and Playwright (E2E), completely overhauling the development foundation
Establishment of testing and quality management infrastructure (built from zero)
Challenges (1)
Coexistence of existing PHP-based project with Vite
Designed a hybrid configuration managing only the React portions with Vite without disrupting the existing PHP+jQuery environment. Structured Vite's build output to be loaded from PHP templates, enabling incremental migration
Auto-Generating Frontend Mocks from Python Backend OpenAPI Specification [High]
Built a pipeline that uses the OpenAPI specification auto-generated by FastAPI as the source to auto-generate TypeScript type definitions, API clients, and mock handlers via MSW (Mock Service Worker) and Orval. Eliminated the need for frontend development to wait for backend implementation
Contributions
- Built a pipeline that auto-generates type definitions, API clients, and mocks from the OpenAPI specification, eliminating frontend dependency on the backend
Decisions (1)
Mock auto-generation pipeline using OpenAPI specification as Single Source of Truth
Designed a pipeline using the OpenAPI specification auto-generated by FastAPI as the sole source of truth, generating TypeScript type definitions and API clients via Orval and mock handlers via MSW. Since manual mock writing makes it difficult to keep up with API changes, auto-generation from the specification simultaneously ensures type safety and mock freshness
Outcomes (1)
Before: Frontend development had to wait for backend API implementation to complete, preventing parallel development
After: By auto-generating mocks from OpenAPI specification, frontend development can begin as soon as the backend API definition is finalized. Mock APIs also work on Storybook, enabling UI verification without the backend
Established development parallelism between frontend and backend
Challenges (1)
Maintaining type consistency between OpenAPI schema and Orval/MSW
Automated regeneration from OpenAPI schema in CI, building a mechanism where frontend type definitions and mocks automatically track backend API changes. Type inconsistencies are immediately detected as TypeScript compilation errors
Building Pull-Based Deployment Infrastructure with Docker Compose + GHCR [High]
Created automation scripts for the entire flow of Docker Compose build, push to GHCR, and pull-based deployment on the production server. Set up GHCR image management, visibility settings, permission configuration, and cron-based pull deployment on the production server
Contributions
- Migrated from manual SSH+SCP deployment to Docker Compose+GHCR pull-based deployment, establishing a reproducible deployment flow
Decisions (2)
Adopting GHCR pull-based deployment (migrating from manual SSH+SCP approach)
Migrated from a deployment approach where developers SSH into the server and SCP upload files, to a pull-based deployment where images are pushed to GHCR and the production server pulls them via cron. Ensured deployment reproducibility with instant rollback capability through image tag switching
GHCR visibility, permissions, and image management design
Managed image visibility at the Organization level in GitHub Container Registry, set up required permissions for production server pulls (Personal Access Token + read:packages scope), and designed image tag naming conventions
Outcomes (1)
Before: Deployment relied on manual SSH+SCP, was person-dependent, with risk of incidents from procedural errors. No rollback mechanism existed
After: Docker Compose+GHCR pull-based deployment ensures reproducibility. Cron-based automatic pulling and image tag management enable easy rollback
Deployment automation and reproducibility establishment
Challenges (1)
Ensuring reliability of cron-based pull deployment
Incorporated health checks, image diff detection, and rollback functionality into the pull script, designed to maintain existing containers when new image pulls fail
Configuring OnlyOffice Server TLS/CORS Settings via Docker Environment Variables [Medium]
Configured OnlyOffice document server TLS certificate and CORS origin settings to be controllable via environment variables through Docker startup script injection. Simplified per-environment configuration switching
Contributions
- Adopted an approach of controlling settings via environment variables through startup script injection rather than directly editing OnlyOffice configuration files
Decisions (1)
Externalizing OnlyOffice configuration via script injection
Since directly mounting OnlyOffice configuration files causes compatibility issues during version upgrades, adopted an approach of dynamically generating configuration files from environment variables via an entrypoint script at Docker startup. Made TLS certificate paths and CORS origins switchable via environment variables
Outcomes (1)
Before: OnlyOffice TLS/CORS settings were hardcoded in configuration files, requiring manual editing when switching environments
After: Enabled control of TLS certificate paths and CORS origins via Docker environment variables, automating configuration switching across development, staging, and production environments
Automated environment switching and externalized configuration
Documenting Local/Remote Hybrid E2E Development Environment Setup [Medium]
Prepared reproducible documentation for building an E2E development environment connecting local React+Python with remote server PHP. Created step-by-step guides including Docker Compose, network configuration, and environment variable management to streamline new member onboarding
Contributions
- Documented the reproducible setup procedure for a local/remote hybrid environment, reducing new member environment setup overhead
Decisions (1)
Standardizing environment setup procedures via Docker Compose integration
Consolidated Docker Compose network configuration, environment variable templates, and connection verification procedures for connecting local React+Python containers with remote server PHP+OnlyOffice into a single document. Aimed to enable new members to reproduce the E2E environment by following the documentation
Outcomes (1)
Before: Environment setup procedures were passed down verbally and person-dependent, taking 1-2 days for new members to set up their environment
After: Reproducible documentation eliminated person-dependency in environment setup procedures. Prepared step-by-step documentation including Docker Compose, network configuration, and environment variables
Improved new member onboarding efficiency
Building Playwright E2E Test Environment and Implementing Test Scenarios [High]
Built an E2E test environment with Playwright covering the entire translation workflow across React/Python/OnlyOffice. Since OnlyOffice Canvas elements have E2E testing limitations, clearly delineated automatable scope from manual testing scope
Contributions
- Implemented regression test scenarios for the entire translation workflow, clearly delineating automatable scope from manual testing scope
Decisions (1)
Clear boundary between automatable and manual testing areas
Since stable E2E testing of operations dependent on OnlyOffice editor Canvas elements is difficult with Playwright, divided test targets into 'translation workflow operation flows and API integration' and 'document operations within OnlyOffice,' with only the former covered by E2E tests
Outcomes (1)
Before: No E2E test environment existed, with regression testing during feature additions and refactoring being manual only
After: Implemented regression test scenarios for the entire translation workflow (file upload, translation execution, result verification) with Playwright. Enabled automated integration testing of React+Python+PostgreSQL within a Docker environment
Automated regression testing via E2E tests (covering automatable areas)
Challenges (1)
Building a test environment integrating 3 services: React+Python+OnlyOffice
Designed a network configuration where Playwright's test runner can access all 3 services launched via Docker Compose. Managed test data initialization and cleanup as fixtures to ensure test independence
Building Development Efficiency Dashboard, Log Aggregation MCP, and Story Generation Agent [Medium]
Created a dashboard visualizing Celery task execution status and translation validation success/failure rates. Also built an MCP server for searching distributed environment logs from Claude Code, and a sub-agent for auto-generating Storybook Stories from components, improving development efficiency
Contributions
- Built a development support infrastructure leveraging AI tools, including an MCP server for log search and a Story auto-generation agent
Decisions (2)
Claude Code integration of distributed logs via MCP server
Built an MCP server enabling cross-container log search from Claude Code across React, Python, and Celery containers. Previously, logs had to be checked individually via docker logs + grep, but MCP tooling enabled log search and filtering from within Claude Code conversations
Storybook Story auto-generation via agent-browser
Built a sub-agent that auto-generates Storybook Stories from existing React components. agent-browser analyzes component implementations and auto-generates Story files covering comprehensive props and state patterns, accelerating UI catalog development
Outcomes (1)
Before: Checking distributed container logs required manual docker logs + grep execution, making incident investigation time-consuming. Storybook Stories also had to be manually created for each UI component
After: MCP server enables cross-container log search from Claude Code. A dashboard visualizing Celery task execution status and translation validation success/failure rates was also created, improving incident investigation and quality monitoring efficiency
Improved incident investigation and development efficiency, accelerated UI catalog development
Technical Research & Documentation
2 members · Mid-Sized Online Learning Platform Company · 2025/05 — 2025/07 · Consulting
Quantified code quality of a VBScript/Oracle-based legacy system using SonarQube, and supported strategic decision-making with a 3-option comparison matrix across 7 evaluation axes (modification/ERP adoption/browser extension). Also built a RAG-based internal document search platform using NotebookLM + markitdown. Delivered results in approximately 2 months of short-term consulting by leveraging generative AI tools
3 tasks
Building a RAG-Based Internal Document Search Platform [Medium]
Converted internal documents to Markdown with markitdown and built a RAG search environment using NotebookLM. Integrated Claude Desktop + SonarQube via MCP to streamline extraction and formatting of quality issue summaries.
Contributions
- Designed the RAG search platform architecture using NotebookLM + markitdown and built the document conversion pipeline
- Proposed a low-cost, fast-delivery approach by leveraging existing SaaS (NotebookLM) instead of building a custom RAG solution
Decisions (1)
Low-effort RAG construction using NotebookLM + markitdown
Adopted an approach of leveraging Google's NotebookLM by converting internal documents (PDF/Word/Excel) to text with markitdown and ingesting them. Minimized costs by utilizing existing SaaS rather than building a custom RAG system
Outcomes (1)
Before: Internal documents were scattered across departmental file servers and cloud storage with no cross-search capability. Finding needed information was time-consuming
After: Built a RAG search platform using NotebookLM + markitdown. Converted internal documents to Markdown and enabled natural language cross-search. Achieved a practical search environment within approximately 2 weeks of consulting
Improved internal document search efficiency
Challenges (1)
Converting diverse file format internal documents to RAG-searchable format
Used markitdown to convert PDF/Word/Excel to Markdown format. Built a conversion pipeline that preserves structural information (headings, tables, lists) as much as possible. Manually quality-checked converted Markdown and made corrections as needed before ingesting into NotebookLM
Legacy System Code Structure Analysis and Extensibility Assessment [Medium]
Performed static analysis of a VBScript/Oracle-based legacy system using Cursor/SonarQube/Claude Desktop. Analyzed extensibility, modification difficulty, and dependencies, then organized and compared options including ERP adoption, existing code modification, and browser extension.
Contributions
- Conducted static analysis with SonarQube and created a module-level modification risk assessment report
- Provided the rationale for adopting the browser extension approach through quantitative, data-driven modification risk visualization
Decisions (2)
Quantifying legacy code quality with SonarQube
Used SonarQube for static analysis of the entire codebase, quantitatively measuring bugs, code smells, duplication rate, and test coverage. Prepared data enabling objective assessment of modification risks
Proposing modification priorities based on quantitative data
Organized static analysis results by module, mapping high-risk areas and their impact scopes. Proposed modification priorities based on quantitative evidence
Outcomes (1)
Before: No objective code quality evaluation existed, and modification risks were unclear
After: Quantitatively evaluated code quality through SonarQube analysis. Identified high-risk modification areas and visualized the overall technical debt landscape
Objectified modification risk through quantitative evaluation. Utilized as rationale for adopting the browser extension approach
Challenges (1)
Conducting investigation in a legacy environment with no version control or tests
Quantified quality through SonarQube static analysis and conducted investigation without impacting the production environment by connecting to an Oracle DB read-only replica. Compiled analysis results into slide-format reports to visualize technical risks for management
Short-Term Extension PoC Design and Executive Decision Support Documentation [Medium]
Created proposal documentation leveraging Mermaid notation extensively for flow diagrams and structural diagrams. Accelerated the documentation process through rapid iterations using generative AI tools including Genspark, Gamma, and Canva.
Contributions
- Designed a 3-option comparison matrix with 7 evaluation axes and a decision tree, and developed a PoC architecture for a React browser extension
- Categorized requirements from 6 departments into 3 tiers: 'addressable via extension,' 'requires code modification,' and 'waiting for ERP,' presenting each department with a realization outlook
Decisions (2)
Strategic selection support via 3-option comparison framework
Created a matrix comparing 3 options across 7 evaluation axes (development risk, cost, timeline, quality assurance, operational impact, extensibility, ROI), and visualized the decision flow in a decision tree format
Proposing a low-risk improvement approach via React browser extension
Proposed an approach of overlaying a React UI onto legacy screens as a Chrome extension. Designed a PoC implementing features such as hierarchical discount master dropdowns on the frontend without modifying the existing database or backend logic
Outcomes (1)
Before: No decision criteria existed for multiple extension approaches (modification/ERP/extension), preventing executive decision-making
After: Supported strategic selection with a 3-option comparison matrix + decision tree. Designed a React browser extension PoC architecture (Lambda+S3+IndexedDB+Chrome Extension) and presented a concrete implementation plan using the discount criteria flexibility use case
Browser extension approved as a short-term measure. ERP adoption moved to a separate 2-3 year medium-to-long-term budget planning process
Challenges (1)
Organizing requirements from 6 departments and categorizing feasibility
Consolidated all requirements from interviews into Excel, categorized into 3 tiers: 'addressable via browser extension,' 'requires existing code modification,' and 'waiting for ERP.' Displayed priorities with star ratings, visualizing when each department's requirements would be realized
Development Organization Advisor
2 members • AI Translation SaaS Startup (Japan) • 2025-04 — 2025-07 • Consulting
Conducted structural analysis of QCD (Quality, Cost, Delivery) challenges in a ~30-person development organization as an external advisor. Structured 100 hypotheses using MECE x Issue Tree methodology, made initiative priorities objective with 5-axis weighted scoring, and created a 6-phase execution roadmap and executive proposal materials. Secured COO approval at the executive meeting
3 tasks
Structural Analysis of Development Organization QCD Challenges [High]
Investigated the root causes of declining development velocity and quality, organizing technical and organizational challenges. Deep-dived into structural issues of the engineering organization including code management, review processes, release procedures, and knowledge silos.
Contributions
- Structured 100 hypotheses using MECE x Issue Tree and extracted 8 major challenges via 5-axis scoring. Described organizational-political challenges neutrally as systemic issues
Decisions (2)
MECE x Issue Tree approach for structuring 100 hypotheses
Combined MECE (Mutually Exclusive, Collectively Exhaustive) with Issue Tree methodology, comprehensively enumerating and evaluating 100 hypotheses using 5-axis quantitative scoring (release velocity contribution, bug rate contribution, execution feasibility, measurability, lead time)
Priority ordering of 8 major challenges with organizational structure mapping
Based on contribution to the top-priority issue 'declining delivery of existing applications,' deep-dived into 3 challenges directly tied to implementers and organized the remaining 5 by stakeholder. Reordered all 8 challenges by importance
Outcomes (1)
Before: Challenges were fragmented with no overall picture. Interview results were subjective and insufficient for priority decisions
After: Structured 100 hypotheses using MECE x Issue Tree and extracted 8 major challenges via 5-axis scoring. Built a challenge map usable for executive decision-making
Completed structuring from 100 hypotheses down to 8 major challenges. Achieved quantitative evaluation, with hypothesis scores ranging from 4.35 (highest) to 1.9 (lowest)
Challenges (2)
Structuring challenges amid insufficient quantitative data
Rather than relying on quantitative data, adopted an approach of structuring hypotheses via MECE x Issue Tree and performing relative evaluation through 5-axis scoring. Built a proprietary framework for converting interview content into 'challenge weights'
Neutral description of organizational-political challenges
Described issues without naming individuals, framing them as systemic challenges such as 'decision-making structure' and 'unclear approval authority.' Solutions were proposed as institutional design rather than personal criticism
Building QCD Improvement Initiative Evaluation Framework and Priority Matrix [High]
Researched and organized improvement initiatives along Quality, Cost, and Delivery axes. Conducted quantitative 'impact x feasibility' evaluation for each initiative. Implemented weighted matrices and priority charts in deliverables, presenting phased execution plans with Gantt charts and responsibility assignment diagrams.
Contributions
- Designed a 5-axis weighted scoring function and auto-generated a 6-phase x 3-week roadmap using RANK.EQ formula
Decisions (2)
Making initiative priorities objective via 5-axis weighted scoring
Designed a scoring function with 5 weighted axes: Q contribution (0.1), C contribution (0.1), D contribution (0.4), monetary cost (0.1), and required effort (0.3). Configured weight distribution to emphasize D (Delivery) contribution and required effort
6-phase x 3-week phased rollout roadmap design
Auto-mapped score rankings to phase numbers using RANK.EQ formula, auto-generating a 6-phase x 3-week Gantt chart. Placed 4-5 initiatives in each phase, with prior phase outcomes serving as prerequisites for subsequent phases in a staged rollout
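As a rough illustration, the weighted scoring and RANK.EQ-style phase mapping described above could be sketched as follows. The weights are the ones stated in the proposal; the axis keys, the 1-5 rating scale, and the per-phase split are assumptions for the sketch:

```typescript
// Sketch of the 5-axis weighted scoring and the rank-to-phase mapping.
// Axis keys and the 1-5 rating scale are assumptions; the weights are
// the ones stated above (D contribution and required effort dominate).
type Axes = { q: number; c: number; d: number; cost: number; effort: number };

const WEIGHTS: Axes = { q: 0.1, c: 0.1, d: 0.4, cost: 0.1, effort: 0.3 };

// Each axis is rated 1-5, higher is better (cheap / low-effort rates high).
function weightedScore(a: Axes): number {
  return (Object.keys(WEIGHTS) as (keyof Axes)[]).reduce(
    (sum, k) => sum + WEIGHTS[k] * a[k],
    0,
  );
}

// RANK.EQ-style competition ranking: equal scores share the same rank.
function rankOf(scores: number[], i: number): number {
  return 1 + scores.filter((s) => s > scores[i]).length;
}

// Map score ranks onto a fixed number of rollout phases (6 in the roadmap).
function phaseOf(scores: number[], i: number, phases = 6): number {
  const perPhase = Math.ceil(scores.length / phases);
  return Math.min(phases, Math.ceil(rankOf(scores, i) / perPhase));
}
```

With 27 initiatives and 6 phases this yields 4-5 initiatives per phase, matching the roadmap shape described above.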
Outcomes (1)
Before: Priorities among 27 initiatives were unclear, causing delays in executive decision-making
After: 5-axis weighted scoring + 6-phase roadmap enabled construction of an execution plan with month-by-month progress visibility
Top 5 initiative priorities approved in a single executive meeting. Target of 30% release speed improvement and 30% bug rate reduction within 4 months (provisional goals)
Challenges (1)
Building an objective evaluation framework for initiative priorities
Implemented 5-axis weighted scoring in Excel and secured objectivity and transparency of scoring results by pre-aligning weight rationale with the COO
Designing and Creating Executive Meeting Presentation Materials [Medium]
To facilitate consensus-building with non-engineer executives, extensively used flow diagrams, sequence diagrams, and decision branch charts in Mermaid notation. Led the creation of decision-support materials leveraging NotebookLM's RAG platform for executive decision-making.
Contributions
- Led the creation of a 2-part executive proposal (23+10 slides). Reframed technical challenges as QCD impacts and explained them through causal chain analysis
Decisions (2)
Two-part executive proposal design (talent optimization + ticket platform redesign)
Designed materials as a 2-part structure: 'Development Organization Talent Optimization Proposal for AI Translation Business' (23 slides, overall picture) and 'QCD Improvement Through Ticket Management Platform Redesign' (10 slides, deep-dive)
Jira unified platform proposal with tool comparison for decision support
Created a comparison table evaluating 4 options (Notion, Planio, Notion+Planio hybrid, Jira) across 5 axes: ticket structure flexibility, cross-department collaboration, UI/UX, workflow design, and tool integration. Recommended Jira + Jira Service Management
Outcomes (1)
Before: No means to explain technical challenges to executives, making it difficult to secure approval for improvement investments
After: Visualized the overall QCD improvement picture and concrete initiatives with a 2-part executive proposal (23 slides + 10 slides). Supported decision-making with tool comparison tables and RACI charts
COO approved PoC implementation. Decision made to begin ticket management unification and Jira adoption evaluation
Challenges (1)
Explaining technical challenges to non-engineer executives
Reframed technical challenges as QCD (Quality, Cost, Delivery) impacts, explaining through causal chains such as 'bug leakage -> rework effort -> cost increase.' Set KPI targets (40% bug rate reduction, 25% lead time reduction) to quantify improvement effects
Full-Stack Engineer
4 members • Manufacturing Industry SaaS Startup • 2024-10 — 2025-03 • Frontend / Backend / Infra
A multi-tenant SaaS for managing equipment maintenance and inspection operations in manufacturing. Handled both NestJS + GraphQL + PostgreSQL backend and React + Apollo Client frontend end-to-end. Designed and implemented core features including RFC5545-compliant recurring task functionality, RBAC+ReBAC 3-axis access control, Google Calendar-style task UI, and field-level incremental save
8 tasks
Design and Implementation of RFC5545-Compliant Recurring Task Feature [Extreme]
Covered yearly, monthly (nth weekday/nth day), weekly (multiple weekdays), and daily recurrence patterns with support for bulk updates, skips, and termination conditions. Designed schema, API, and batch jobs to separate virtual and materialized instances while displaying them on a unified view. Adopted a day-ahead batch materialization architecture using Redis+SQS+EventBridge.
Contributions
- Analyzed the RFC5545 specification and designed the architecture for recurrence rule expansion, exception handling, and batch materialization. Created specification documents and design retrospectives to systematically record design rationale and alternatives
- Implemented SQL/API for merging materialized and virtual instances using generate_series + UNION ALL + DISTINCT ON
Decisions (3)
Adopting day-ahead batch materialization (EventBridge+SQS)
Adopted day-ahead batch materialization using EventBridge+SQS+NestJS SQS Consumer
Materialized/virtual instance merge approach using generate_series + UNION ALL
Expanded dates with PostgreSQL generate_series, restored 20+ columns from template JSON definitions, then deduplicated with DISTINCT ON after UNION ALL with materialized records
Integrating 3 time model types in a single model
Prioritizing compatibility with the existing system, adopted a design integrating 3 time model types (date-only, with time, duration-specified) in a single model. Documented an extension design incorporating the timeModel concept in the template for future separation
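The merge approach from the second decision can be sketched in TypeScript (the real implementation is a PostgreSQL query; the field names here are hypothetical): virtual occurrences expanded from the recurrence rule are unioned with materialized rows and deduplicated per (templateId, date), with the materialized row winning, which is the same effect as DISTINCT ON with materialized rows sorted first.

```typescript
// TypeScript analogue of generate_series + UNION ALL + DISTINCT ON.
// Field names are assumptions; the real code runs this merge in SQL.
type Instance = { templateId: string; date: string; materialized: boolean };

function mergeInstances(virtual: Instance[], materialized: Instance[]): Instance[] {
  const byKey = new Map<string, Instance>();
  // UNION ALL, with materialized rows first so they take priority.
  for (const row of [...materialized, ...virtual]) {
    const key = `${row.templateId}:${row.date}`;
    // DISTINCT ON keeps only the first row per (templateId, date) key.
    if (!byKey.has(key)) byKey.set(key, row);
  }
  return [...byKey.values()];
}
```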
Outcomes (2)
Before: Recurring task feature was not implemented, requiring manual creation of daily/weekly routine inspections
After: Released RFC5545-compliant recurrence rule feature, enabling automatic generation of Daily/Weekly/Monthly recurring tasks
Reduced manual creation effort for routine inspections
Before: Recurrence design discussions could not converge, with design specifications scattered
After: Created specification documents and retrospectives to systematize current implementation issues and ideal design. Formulated a 6-phase improvement roadmap
Organizational accumulation of design knowledge and clarity of improvement roadmap
Challenges (3)
Designing recurrence rules integrating 3 time model types in a single model
Started with a minimal implementation of date-only (no time support) and documented the ideal design incorporating the timeModel concept in templates as a specification. Clarified the migration path toward future 3-model separation
300+ line SQL resulting from restoring all fields from template JSONB definitions
Progressively built a 300+ line CTE chain, clearly separating CTE responsibilities across stages: recurrence rule expansion, date generation, virtual task generation, merge with materialized tasks, and deduplication. Maintained a maintainable structure while documenting ideal designs such as template reference approach in retrospective documents
Forced batch materialization due to dashboard pivot API constraints
Built a CTE chain within SQL to convert virtual tasks into the same column structure as materialized records. Analyzed alternative approaches in detail in retrospective documents, including two-stage aggregation + application-layer merge leveraging COUNT/SUM associativity (with mathematical proof)
Design, Consensus Building, and Implementation of 3-Axis Access Control (Scope x Resource x Action) [High]
Defined permissions along 3 axes: scope (headquarters/factory, etc.) x resource x action. Compared 2 approaches — individual assignment vs. role-based assignment — and facilitated team consensus. Maintained consistency by co-managing API authorization and UI display control with CASL Ability.
Contributions
- Designed an RBAC+ReBAC hybrid ACL model and comprehensively verified 30+ use cases. Detailed decision rationale and alternatives in design documentation
- Compared individual assignment and role-based assignment approaches, facilitating design consensus with the team
Decisions (3)
Adopting RBAC+ReBAC hybrid ACL model
Adopted RBAC+ReBAC hybrid (future ABAC-extensible) for the DB layer, with a 3-phase staged rollout design for the UI layer
Deny-by-default + template-based permission evaluation
Default deny; if any explicit deny exists, deny; otherwise if any allow exists, allow; if neither, deny — a 3-stage evaluation
Optional hierarchy inheritance via scopeType+inheritChildren
Added an inheritance flag to per-scope role assignments, making inheritance toggle selectable at role assignment time
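The deny-by-default evaluation from the second decision can be sketched as follows (the rule shape is an assumption; the three stages are the ones described above):

```typescript
// Sketch of the 3-stage evaluation: (1) any explicit deny -> deny,
// (2) otherwise any allow -> allow, (3) no matching rule -> deny.
type Effect = "allow" | "deny";
type Rule = { effect: Effect; resource: string; action: string };

function isAllowed(rules: Rule[], resource: string, action: string): boolean {
  const matches = rules.filter((r) => r.resource === resource && r.action === action);
  if (matches.some((r) => r.effect === "deny")) return false; // explicit deny wins
  return matches.some((r) => r.effect === "allow"); // default deny when nothing matches
}
```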
Outcomes (2)
Before: Access control was not implemented, allowing all users to access all data
After: Designed and achieved team consensus on an RBAC+ReBAC ACL system with 3-tier scope hierarchy (Organization > Site > Project) and 5 system-defined templates
Completed ACL model design and team consensus
Before: ACL requirements were scattered, with no comprehensive verification of 30+ use cases
After: Created design documentation and use case verification tables. Confirmed coverage of 12 use cases (multi-factory assignment, external engineers, auditors, etc.)
Comprehensive requirements verification and design documentation
Challenges (1)
Balancing permission hierarchy design in multi-tenant SaaS
Achieved both flexibility and manageability through optional inheritance via inheritance flags and a 2-layer structure of role templates + individual permission overrides. Documented 30+ use cases and verified coverage of each pattern
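The optional inheritance via inheritance flags could be resolved roughly as in this sketch (type and field names are hypothetical): a role assigned at a scope applies to descendant scopes only when its inheritChildren flag is set.

```typescript
// Sketch of optional hierarchy inheritance over the 3-tier scope
// hierarchy (Organization > Site > Project). Names are assumptions.
type Assignment = { scopeId: string; role: string; inheritChildren: boolean };

// parentOf maps each scope to its parent scope (undefined at the root).
function effectiveRoles(
  scopeId: string,
  assignments: Assignment[],
  parentOf: Record<string, string | undefined>,
): string[] {
  const roles = new Set<string>();
  // Roles assigned directly at this scope always apply.
  for (const a of assignments) if (a.scopeId === scopeId) roles.add(a.role);
  // Walk up the hierarchy; an ancestor's role applies only when
  // inheritance was enabled at role assignment time.
  let p = parentOf[scopeId];
  while (p !== undefined) {
    for (const a of assignments)
      if (a.scopeId === p && a.inheritChildren) roles.add(a.role);
    p = parentOf[p];
  }
  return [...roles];
}
```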
Top Screen Rendering Optimization (70%+ Render Time Reduction) [High]
Minimized re-rendering overhead from filter, list, and detail view interactions through state structure redesign. Focused refactoring on areas with high rendering cost and significant UX impact within limited effort budget.
Contributions
- Analyzed re-renders using React DevTools Profiler and selectively applied React.memo/useMemo/useCallback to achieve 70%+ render time reduction
Decisions (1)
Eliminating unnecessary re-renders with React.memo + useMemo
Visualized component tree re-renders using React DevTools Profiler, then eliminated unnecessary re-renders with React.memo, useMemo, and useCallback. Achieved over 70% render time reduction
Outcomes (1)
Before: Slow rendering times negatively impacting UX
After: Over 70% reduction in rendering time
Rendering time reduction rate
Challenges (1)
Blanket memoization of all components vs. Profiler-driven selective optimization
Used React DevTools Profiler to visually inspect component tree re-renders. Identified only the actually slow components and selectively applied React.memo/useMemo/useCallback. Achieved 70%+ render time reduction while minimizing effort
Implementing Google Calendar-Style Task Display UI [Medium]
Implemented a calendar view supporting weekly, monthly, and 3-day display modes. Achieved rounded corner rendering, variable display areas, and scheduler support using CSS Grid/Subgrid.
Contributions
- Built a custom packing algorithm (row occupancy mapping with top-aligned placement) and designed a reactive update architecture using Apollo Client as SSoT
- Implemented 3/4/7-day variable view, drag-and-drop date changes, cross-week rounded corners, and CSS scroll snap mobile support from scratch
Decisions (3)
Decision to build calendar UI from scratch
Built the calendar UI from zero using React+CSS without library dependencies. Designed to accept variable day counts (3-day/4-day/weekly, etc.) as external parameters, ensuring the layout does not break at any day-count width
Drag-and-drop date changes and incremental save integration using Apollo Client as SSoT
Designed Apollo Client cache as the Single Source of Truth (SSoT). Built a mechanism where the calendar reactively re-renders whenever the Apollo cache is updated — whether through D&D date changes or incremental saves from the edit modal
Responsive calendar UI with CSS scroll snap for mobile optimization
Adopted a design that switches to a significantly different UI on mobile compared to desktop, with a slide-in task list on date tap. Applied CSS scroll snap to ensure scrolling always snaps to date boundaries, preventing stops at intermediate positions
Outcomes (2)
Before: No calendar UI existed; work schedules were displayed only in a list format with poor overview
After: Implemented a custom calendar UI from scratch with Google Calendar-equivalent interaction. Achieved dynamic switching between 3-day/4-day/weekly views with cross-week event rounded corner display
Provided a UI where users can intuitively grasp and manage work schedules. Full-scratch implementation enables flexible adaptation to requirement changes
Before: Task date management was limited to a table-format list view, making it difficult to visually grasp the overall schedule
After: Built a Google Calendar-style UI from zero. Achieved dense packing display of multi-day/single-day tasks, D&D date changes, real-time updates via Apollo Client SSoT, responsive design (including scroll snap), and variable 3-day/4-day/7-day view switching
Calendar UI completeness and usability
Challenges (3)
Rounded corner UI representation for events spanning weeks
Split events into per-week segments and dynamically applied border-radius classes based on each segment's position (first/middle/last). First segments get left rounded corners, last segments get right rounded corners, and middle segments have no rounding
Responsive layout for variable day-count views
Accepted the day count parameter as a component prop and dynamically calculated column widths in CSS Grid fr units. Event placement logic was also changed to dynamically compute grid-column positions from startDate/endDate
Packing algorithm for multi-day and single-day tasks (gap-free top-aligned layout)
Built a custom packing algorithm managing per-row occupancy state. Multi-day tasks are mapped to rows first, then single-day tasks are placed in the topmost available row. This achieved the same dense layout as Google Calendar
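The top-aligned packing above can be sketched as a pure function (the event shape, with inclusive day columns, is an assumption): rows hold per-day occupancy, multi-day events are placed first, and each event goes into the topmost row whose day columns are all free.

```typescript
// Sketch of the row-occupancy packing: multi-day tasks first, then
// single-day tasks into the topmost free row, Google Calendar-style.
type CalEvent = { start: number; end: number }; // day columns, inclusive

function pack(events: CalEvent[]): number[] {
  // Longer (multi-day) events first, then by start day.
  const order = events
    .map((_, i) => i)
    .sort(
      (a, b) =>
        (events[b].end - events[b].start) - (events[a].end - events[a].start) ||
        events[a].start - events[b].start,
    );
  const rows: boolean[][] = []; // rows[r][day] = occupied
  const rowOf = new Array<number>(events.length);
  for (const i of order) {
    const { start, end } = events[i];
    let r = 0;
    // Find the topmost row with no overlap in [start, end].
    while (rows[r]?.slice(start, end + 1).some(Boolean)) r++;
    rows[r] ??= [];
    for (let d = start; d <= end; d++) rows[r][d] = true;
    rowOf[i] = r;
  }
  return rowOf; // row index per event, in input order
}
```

A short single-day task after a spanning task lands in row 1 only where the span occupies row 0, and back in row 0 elsewhere, which is what gives the gap-free layout.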
Implementing Type-Specific Validation for Dynamic Form Structures [High]
Implemented type-specific validation using RHF+Zod for dynamically addable/removable fields (string/number/date, etc.) on templates. Achieved both processing separation and reusability between creation modals and edit screens. Supported active state control, option display control, and cross-field validation.
Implementing On-Blur Incremental Save with Diff Detection [Medium]
Implemented incremental save on focus-out targeting only changed fields to prevent data loss. Conducted retry testing under unstable network conditions using DevTools throttling.
Contributions
- Designed a field-level onBlur incremental save + Command Pattern architecture and implemented a failed command retry mechanism
Decisions (2)
Adopting field-level onBlur incremental save (rejecting form-wide batch save)
Adopted an incremental save approach where each field has an independent react-hook-form instance, performing an isEqual diff check on onBlur events and immediately sending a GraphQL mutation
RPC-style approach encapsulating field changes as command data
Structured each field change as command data, assigned UUIDs, and sent to the backend in an RPC-style pattern. Type-defined the changeable field sets per resource type, treating change operations as serializable data
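The per-field diff check and command encoding could look like this sketch (shapes are assumptions; the real code uses lodash isEqual for the deep comparison): on blur, the current value is compared with the last saved value, and a serializable command with a UUID is produced only when they differ.

```typescript
// Sketch of onBlur diff detection producing a Command Pattern payload.
// Shapes are hypothetical; JSON.stringify stands in for a deep-equal check.
import { randomUUID } from "node:crypto";

type FieldCommand = {
  id: string; // UUID lets the backend deduplicate retried commands
  resource: string;
  field: string;
  value: unknown;
};

function commandOnBlur(
  resource: string,
  field: string,
  saved: unknown,
  current: unknown,
): FieldCommand | null {
  // No diff -> no mutation is sent.
  if (JSON.stringify(saved) === JSON.stringify(current)) return null;
  return { id: randomUUID(), resource, field, value: current };
}
```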
Outcomes (1)
Before: Form-wide batch save posed risk of input data loss in factory Wi-Fi environments
After: Implemented field-level onBlur incremental save + Command Pattern + failed command retry mechanism. Designed a 3-stage improvement roadmap (localStorage persistence, Service Worker introduction, full offline support)
Significantly reduced data loss risk and established future improvement plan
Challenges (1)
Data preservation in unstable factory Wi-Fi environments
Implemented field-level onBlur incremental save + failed command accumulation via useRef + retry on save button click. On network errors, form values are preserved; on client errors, values are reset to server state — a two-stage error handling approach
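The two-stage error handling above can be sketched as follows (the error classification and shapes are assumptions): network errors keep the user's input and queue the command for retry, while client errors roll the field back to the server state.

```typescript
// Sketch of the two-stage save error handling. Shapes are hypothetical.
type SaveError = { kind: "network" | "client" };

function onSaveError(
  err: SaveError,
  typedValue: string,
  serverValue: string,
  retryQueue: object[],
  command: object,
): string {
  if (err.kind === "network") {
    retryQueue.push(command); // retried later, e.g. on save-button click
    return typedValue; // preserve what the user typed
  }
  return serverValue; // client error: reset to the last saved state
}
```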
Establishing UI Component Directory Structure and Naming Conventions [Medium]
Proposed and built consensus on directory structure, naming conventions, and component composition rules to improve reusability of domain-specific components. Established as shared team conventions.
Storybook-Driven UI State Visualization and Internationalization Foundation [Medium]
Visualized all UI states with Storybook to facilitate future display variation support. Implemented multilingual support for card UI (including English wording proposals) and established an internationalization foundation.
Frontend Tech Lead
10 members • HR Consulting Listed Subsidiary • 2022-10 — 2024-09 • Frontend / Tech Lead / Testing
Led frontend development of a new graduate recruitment management SaaS as tech lead for 2 years. Developed B2B (HR admin dashboard) and B2C (applicant entry interface) in a pnpm monorepo configuration. Designed and implemented core features including a Specification pattern-based dynamic form builder, Suspense-enabled dashboard, and VRT pipeline, driving quality and development efficiency improvements across a 10-person team
8 tasks
Team Management and Quality Assurance as Frontend Tech Lead [High]
Led task assignment, story point updates, knowledge sharing, PR review culture development, and implementation guide preparation. Introduced automated library updates with Renovate. Hosted team meetings to promote sharing of technical topics, code conventions, and screen specifications.
Contributions
- Led the entire B2C application as an orchestrator: specification discovery, consensus-building with backend team, task decomposition, member assignment, and owning critical path items
- Established team quality standards through code review guideline creation, VRT environment setup, and intern mentoring
Decisions (3)
Intern mentoring and code review-driven team quality improvement
Actively conducted code reviews, mentoring intern members through feedback. Adopted an OJT approach of practically conveying coding conventions and design patterns through the review process
Single point of contact with backend team and orchestrator-style management
Personally discovered all B2C application specifications in a short period and conducted thorough 1-on-1 alignment sessions with the backend team lead. After consensus, held onboarding meetings for B2C team members to explain all specifications. Functioned as a one-stop window, consolidating team questions and resolving them with the backend team
Task decomposition, dependency mapping, and member-strengths-based assignment
Decomposed specifications into tasks, clarified dependencies, and created Jira tickets. Assigned tickets based on member strengths, weaknesses, skill levels, and preferences. Proactively took on items that were single points of failure (critical path) in task dependency chains
Outcomes (1)
Before: Inconsistent frontend quality with CSS regression oversights
After: Built a visual regression testing environment with Storybook+storycap+reg-suit, automatically detecting UI diffs per PR. Also established code review guidelines
Established automated UI quality assurance framework
Challenges (3)
Balancing technical debt and development velocity as FE tech lead
Built a visual regression testing environment with Storybook+storycap+reg-suit to automatically ensure UI quality. Established code review guidelines to raise quality standards across the entire team
Detailing B2C applicant application specifications within a 4-month timeframe
Led the work from specification detailing onward as tech lead. Organized applicant user flows and systematically defined transition conditions, display content, and validation rules for each status. Confirmed specifications in an agile manner, in parallel with implementation
Discovering dynamic form specification edge cases and achieving backend consensus
Decided that the zero-option case would be handled via B2B-side form configuration with customer support intervention, with the B2C application not displaying any alert. Individually reached consensus with the backend team lead on each such edge case, documented decisions, and shared with the team
Applicant Entry Flow Specification and Implementation [High]
Detailed specification development and frontend implementation for the registration, job application, and selection step entry flow. Completed all feature development within a 4-month timeline, receiving high praise from the client.
Contributions
- Designed a unified approach for 4 form flows using a shared DynamicForm foundation, implementing multi-page validation and transition control
Decisions (2)
Implementing 4 B2C form flows on a shared DynamicForm foundation
Adopted a design centered on form specification definition classes, sharing common useForm hooks, input component libraries, and validation systems, while individually defining only page structure, submission targets, and parameter differences per flow
Multi-page form page-level validation and transition control
Managed page-level validation state with useFormState while converting between URL page indices (1-based) and array indices (0-based). Executed trigger() at page scope to block navigation to unvalidated pages
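The index conversion and navigation gate above reduce to a small pure function (the validity array standing in for per-page trigger() results is an assumption):

```typescript
// Sketch of the 1-based URL page vs 0-based array index conversion and
// the navigation gate. pageValid[i] stands for trigger()'s result on page i.
function toArrayIndex(urlPage: number): number {
  return urlPage - 1;
}

function canNavigate(targetUrlPage: number, pageValid: boolean[]): boolean {
  const target = toArrayIndex(targetUrlPage);
  if (target < 0 || target >= pageValid.length) return false;
  // Every page before the target must already have passed validation.
  return pageValid.slice(0, target).every(Boolean);
}
```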
Outcomes (2)
Before: A 4-month development timeline constraint
After: Completed all feature development within the timeline, receiving high praise from the client
Development completion rate and client satisfaction
Before: No applicant entry forms existed, with the B2C side of the recruitment management SaaS undeveloped
After: Implemented 4 form flows (registration, pre-entry, profile update, My Page tasks) on a shared DynamicForm foundation. Achieved 24 input component types, 50+ validation rules, and multi-page navigation
B2C applicant form foundation completeness
Challenges (2)
Managing complex state transitions in the applicant entry flow
Created detailed state transition diagrams during specification phase to visualize all patterns. Implemented type-level prevention of invalid transitions. Completed all features within the 4-month development timeline
Mapping server-side validation errors to field level
Determined GraphQL validation errors within useEffect, separating screen-wide banner errors from field-level errors. Unified error handling through a custom useForm hook
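Splitting server-side errors into banner-level and field-level buckets could look like this sketch (the `field` extension key is an assumption about the error payload):

```typescript
// Sketch of partitioning GraphQL errors: errors carrying a field path go
// next to the input; the rest go to the screen-wide banner.
type GqlError = { message: string; extensions?: { field?: string } };

function splitErrors(errors: GqlError[]): {
  fieldErrors: Record<string, string>;
  bannerErrors: string[];
} {
  const fieldErrors: Record<string, string> = {};
  const bannerErrors: string[] = [];
  for (const e of errors) {
    const field = e.extensions?.field;
    if (field) fieldErrors[field] = e.message; // field-level display
    else bannerErrors.push(e.message); // screen-wide banner
  }
  return { fieldErrors, bannerErrors };
}
```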
Designing and Implementing Dynamic Form Builder for HR (Specification Pattern Adoption) [Extreme]
Implemented a form builder enabling HR staff to configure pages, headings, input fields, validation, parent-child relationships, etc. Solved class state issues with the Specification pattern while maintaining consistency with RHF. Achieved a design balancing cohesion and extensibility.
Contributions
- Designed and implemented a cross-field validation foundation with Specification pattern x 50+ Yup custom methods. Built a 3-layer form generation engine
- Implemented reactive option filtering with parent-child field linkage + useWatch + automatic selected value clearing
Decisions (4)
Dynamic form validation design using the Specification pattern
Adopted the Specification pattern (from Domain-Driven Design) to implement a composable design where conditions are treated as objects
Cross-field validation foundation with Specification pattern x Yup custom methods
Added 50+ custom methods to Yup schemas, applied uniformly across all schema types. Declaratively described dependent fields via metadata, automatically building the dependency graph
Separating validation into a shared package in pnpm monorepo
Adopted a 3-package structure with pnpm workspace: B2B, B2C, and shared. Placed the validation foundation in the shared package for re-export to B2B/B2C. Centrally managed GraphQL-derived enum definitions via an enum registry
Reload-free reactive option filtering via parent-child field linkage
Used useWatch() in parent-child linked components to reactively observe parent field value changes. Passed filter functions to child components and filtered options via useMemo. A selected value resetter automatically clears invalidated selections
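The Specification pattern at the heart of this design treats a condition as an object exposing isSatisfiedBy, composable with and/or/not. A minimal sketch (field names in the example are hypothetical):

```typescript
// Minimal Specification pattern sketch: conditions as composable objects
// evaluated against form values.
type FormValues = Record<string, unknown>;

class Spec {
  constructor(readonly isSatisfiedBy: (values: FormValues) => boolean) {}
  and(other: Spec): Spec {
    return new Spec((v) => this.isSatisfiedBy(v) && other.isSatisfiedBy(v));
  }
  or(other: Spec): Spec {
    return new Spec((v) => this.isSatisfiedBy(v) || other.isSatisfiedBy(v));
  }
  not(): Spec {
    return new Spec((v) => !this.isSatisfiedBy(v));
  }
}

const equals = (field: string, value: unknown): Spec =>
  new Spec((v) => v[field] === value);

// Hypothetical example: require a portfolio from engineer applicants
// who are not new graduates.
const needsPortfolio = equals("jobCategory", "engineer").and(
  equals("applicantType", "newGrad").not(),
);
```

Because conditions are first-class objects rather than hardcoded branches, HR-configured rules can be serialized, composed, and evaluated uniformly across the dynamic form engine.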
Outcomes (2)
Before: Application form conditions were hardcoded, requiring code changes for every condition modification
After: Implemented declarative condition definitions using the Specification pattern, enabling HR staff to configure form conditions without code
Self-service form condition changes
Before: Form fields were hardcoded, requiring engineer implementation for every field addition or change
After: Dynamic form generation engine enables HR staff to freely configure form fields. Achieved a dynamic form foundation with 50+ validation rules, 24 input component types, cross-field validation, and reactive option filtering. Delivered to both B2B/B2C via shared package
Dynamic form foundation flexibility and quality
Challenges (3)
Combinatorial explosion of condition expressions in HR dynamic forms
Adopted the Specification pattern (from DDD), designing conditions as first-class objects composable with AND/OR/NOT. Implemented JSON Schema-like declarative condition definitions
Synchronization control between cross-field validation and reactive UI
A selected value reset component detects option changes and immediately clears invalid values. Automatically built a field dependency graph, using React Hook Form's deps option to auto-trigger dependent field re-validation. Wildcard matching of array indices also handles dynamic form dependencies
Designing a schema-driven dynamic form generation engine
Designed 3-layer specification definition classes: form-level, page-level, and field-level. Each layer dynamically builds validation schemas, auto-generating per-page schemas. TypeScript type parameters also ensure form value type safety
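The wildcard matching of array indices mentioned above can be sketched as a small path matcher (the path syntax follows React Hook Form's dot notation; the matcher itself is an assumption):

```typescript
// Sketch of wildcard dependency-path matching: "items.*.price" matches
// "items.0.price", "items.12.price", etc., so dynamic array fields can
// participate in the dependency graph.
function matchesPath(pattern: string, path: string): boolean {
  const p = pattern.split(".");
  const q = path.split(".");
  if (p.length !== q.length) return false;
  // "*" segments match only numeric array indices; others match exactly.
  return p.every((seg, i) => (seg === "*" ? /^\d+$/.test(q[i]) : seg === q[i]));
}
```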
HR Dashboard Detailed Specification and Suspense-Enabled Implementation [High]
Detailed specification development and implementation of the top screen dashboard. Applied Suspense to the overall layout and 3 panel types. Identified render causes with React Profiler, improving both actual performance and perceived wait times.
Contributions
- Designed and implemented an independent widget data-fetching architecture (Suspense+ErrorBoundary) and custom Masonry grid algorithm
Decisions (3)
Adopting independent widget data-fetching architecture
Adopted an architecture where each widget independently fetches its data. Used Apollo Client useReadQuery to complete Suspense-compatible data fetching within each widget component
Full-scratch implementation of custom Masonry grid layout
Implemented a custom grid placement algorithm. Tracked current row indices of left and right columns, placing widgets in the shorter column to achieve the Masonry effect
Widget drag-and-drop reordering with dnd-kit v6
Adopted @dnd-kit/core v6.1.0 + @dnd-kit/sortable v8.0.0. Managed lists with SortableContext and controlled D&D state per widget with the useSortable hook. Implemented DragOverlay for in-motion preview display
Outcomes (1)
Before: Dashboard data fetching used a waterfall pattern, blocking all interactions until all widgets finished loading. Gap issues in widget placement also existed
After: Achieved independent fetching + skeleton display with React Suspense + Apollo useReadQuery + Material UI Skeleton. Gap-free placement with custom Masonry grid. Implemented D&D reordering with dnd-kit v6 and 1-column/2-column toggle
Improved UX where each widget independently transitions from loading to rendered. Achieved loosely coupled architecture extensible for future third-party marketplace integration
Challenges (2)
Resolving the waterfall problem with 20 simultaneous widget data fetches
Migrated to Suspense-compatible data fetching using Apollo Client 3.10 useReadQuery. Wrapped each widget in a React Suspense boundary with Material UI Skeleton as fallback. Applied individual ErrorBoundary per widget to prevent a single API failure from propagating to other widgets
Custom grid layout implementation in environments without native CSS Masonry support
Implemented a custom placement algorithm. Tracked current row indices of left and right columns, placing each widget in the shorter column. Achieved Masonry-like gap-free placement by dynamically calculating grid-row-start/grid-row-span on CSS Grid
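The shorter-column placement described above can be sketched as follows (a simplified two-column version with invented types; the real implementation mapped the results onto CSS Grid via grid-row-start/grid-row-span):

```typescript
// Each widget spans a known number of grid rows; widgets are placed into
// whichever column is currently shorter, producing a gap-free Masonry layout.
interface Widget { id: string; rowSpan: number }
interface Placement { id: string; column: 1 | 2; rowStart: number; rowSpan: number }

function placeMasonry(widgets: Widget[]): Placement[] {
  // Next free row index for each column (1-based, matching grid-row-start).
  const nextRow: [number, number] = [1, 1];
  return widgets.map((w) => {
    const col = nextRow[0] <= nextRow[1] ? 0 : 1; // pick the shorter column
    const placement: Placement = {
      id: w.id,
      column: (col + 1) as 1 | 2,
      rowStart: nextRow[col], // → grid-row-start
      rowSpan: w.rowSpan,     // → grid-row: span N
    };
    nextRow[col] += w.rowSpan;
    return placement;
  });
}

const layout = placeMasonry([
  { id: "a", rowSpan: 2 },
  { id: "b", rowSpan: 1 },
  { id: "c", rowSpan: 1 },
]);
console.log(layout); // "a" fills rows 1-2 of column 1; "b" and "c" stack in column 2
```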
Cross-Cutting Concerns: Error Handling, Caching, and Access Control (High)
Implemented error handling, cache reset, query header parameter injection, redirects, query batching, and auto-generated validation fixes from GraphQL schema.
Building Visual Regression Testing Environment with Storybook+storycap+reg-suit (High)
Integrated Storybook with storycap and reg-suit into GitHub Actions CI for UI regression testing. Also set up Playwright E2E regression tests in the CI environment.
Contributions
- Designed and built a VRT pipeline with Storybook + storycap + reg-suit + GitHub Actions + S3, establishing a VRT culture across the team
- Implemented parallel B2B/B2C screenshot capture via matrix strategy, 0.1% threshold diff comparison, and automatic PR comment posting
Decisions (2)
VRT pipeline design with Storybook v8 + storycap + reg-suit + S3
Built the Storybook environment with @storybook/react-vite v8.1.5, automated screenshot capture with storycap v5.0.0, pixel diff comparison with reg-suit (0.1% threshold), result publishing to AWS S3, and a diff review flow via GitHub PR notifications
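A regconfig.json along these lines would express that setup (bucket name and client ID are placeholders, not the project's actual values):

```json
{
  "core": {
    "workingDir": ".reg",
    "actualDir": "__screenshots__",
    "thresholdRate": 0.001
  },
  "plugins": {
    "reg-publish-s3-plugin": { "bucketName": "example-vrt-results" },
    "reg-notify-github-plugin": { "clientId": "<reg-suit GitHub app client ID>" }
  }
}
```

Here thresholdRate 0.001 corresponds to the 0.1% pixel-diff threshold, the S3 plugin publishes diff reports, and the GitHub plugin posts the PR notifications mentioned above.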
Parallel B2B/B2C VRT execution via GitHub Actions matrix strategy
Designed a 2-stage pipeline where GitHub Actions matrix strategy captures B2B/B2C screenshots in parallel with storycap and uploads as artifacts, with a subsequent VRT job integrating and running reg-suit
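The two-stage pipeline might look like the following workflow fragment (job, script, and artifact names are illustrative, not the actual project config):

```yaml
jobs:
  capture:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        target: [b2b, b2c]   # capture B2B and B2C screenshots in parallel
    steps:
      - uses: actions/checkout@v4
      - run: yarn vrt:capture   # assumed package script that runs storycap
        working-directory: ${{ matrix.target }}
      - uses: actions/upload-artifact@v4
        with:
          name: screenshots-${{ matrix.target }}
          path: __screenshots__
  vrt:
    needs: capture             # second stage: merge artifacts, run reg-suit
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/download-artifact@v4
        with:
          pattern: screenshots-*
          merge-multiple: true
          path: __screenshots__
      - run: npx reg-suit run
```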
Outcomes (2)
Before: Unintended UI changes (CSS regressions) were discovered only after release
After: storycap+reg-suit automatically compares screenshots per PR, enabling 100% pre-merge detection of CSS regressions
CSS regression detection rate
Before: UI change quality verification relied solely on manual visual inspection, with overlooked regression bugs discovered post-release
After: Built a VRT pipeline with Storybook v8.1.5 + storycap + reg-suit + GitHub Actions + S3. Automatically compared screenshots of all UI components across B2B (215 stories) and B2C (49 pages) per PR
Automated visual regression detection at 0.1% pixel diff threshold. PR comment-based diff image review became standard, reducing post-release UI regression bugs
Challenges (2)
Stabilizing storycap timeouts and asset waiting
Set screenshot: { waitAssets: true } as a default parameter in preview.tsx to wait for asset loading completion. Configured serverTimeout 60000ms and captureTimeout 15000ms for storycap execution. Adjusted per-story delays to build a stable screenshot environment
Establishing VRT culture across the team
Introduced PR comment-based diff image display via reg-notify-github-plugin and established a team rule to include diffs in reviews. Promoted component development on Storybook, building a development process where story creation naturally integrates as part of VRT
Migration from react-admin to Apollo Client/RHF/MUI and GraphQL Suspense Introduction (High)
Proposed and drove the migration from react-admin to Apollo Client, RHF, and MUI to improve development efficiency. Evaluated and fully adopted GraphQL Suspense and React Suspense for display speed improvements.
Smoke Test Team Leader and Testing Facilitation (Medium)
Served as testing phase leader, proactively taking on the driver role. Shared screen and transition specifications when other members drove. Led bug ticket creation and test status management.
LIFF Frontend / Native App / Backend Engineer
7 members • Mobile Order Application Company • 2022-04 — 2022-09 • Frontend, Backend • 4 tasks
Multi-Platform Frontend Development: Web/LIFF/Native App (High)
Handled all development across Web (Next.js), LIFF app, and native app (React Native/Expo). Implemented a wide range of domain logic including order management, store LINE integration, POS integration, inventory management, and closing processes.
Mobile Order Internationalization (English & Chinese) (Medium)
Researched English and Chinese app UIs with the premise that logos and text display should not break regardless of user device, and that meanings should be concisely understandable. Iterated on UI improvements through prototyping and discussions with designers and the PO.
POS System Interim Closing Implementation (Shared Logic with Final Closing & Unit Tests) (Medium)
Extracted shared processing between interim and final closing, resolved variable naming inconsistencies. Added unit tests to achieve a low-debt implementation.
Kitchen Display Order Status Aggregation by Table, Menu Item, and Time Period (Medium)
Implemented features and UI improvements for the kitchen display. Built aggregation of order status by table, menu item, and time period.
Frontend / Backend Engineer
5 members • Board of Directors DX Service Company • 2022-03 — 2022-05 • Frontend, Backend • 3 tasks
UI Component Implementation and Storybook Organization (Atomic Design) (Medium)
Reorganized Storybook directory structure following Atomic Design to solve the low discoverability and searchability of UI components. Cataloged all UI components in Storybook to improve screen implementation efficiency.
Document Preparation Assistant and Written Resolution Screen Implementation with E2E Tests (Medium)
Detailed implementation of document preparation assistant and written resolution screens. Ensured quality through Playwright E2E test implementation.
Backend Implementation of Scheduling Feature (Medium)
Backend implementation of the scheduling feature using Node.js/Express/GraphQL/Prisma.
Frontend Engineer
1 member • Freelance • 2021-05 — 2022-03 • Frontend • 1 task
React/Next.js SPA Website Development (4 Projects) (Medium)
Developed SPA websites for a web design agency, recruitment company, data analytics firm, and restaurant. Handled frontend application integration with CMS (WordPress/Contentful, etc.) and hosting on Vercel/Netlify/S3.
Data Engineer / Frontend Engineer
3 members • Bitkey, Inc. • 2020-08 — 2021-03 • Data, Frontend • 4 tasks
Company-Wide Shared KPI Definition and Design (High)
Organized KPIs related to management, product, sales, quality, and usage, defining the set of metrics to be shared across all employees. Led metric design to foster a cross-team culture of collaborative insight.
Building Data Aggregation Pipeline from Multiple Sources to BigQuery (High)
Implemented scheduled jobs on AWS Lambda and Cloud Functions to aggregate data scattered across Amazon Redshift, Amazon Aurora, Salesforce, and Cloud Firestore into BigQuery. Also handled semi-structured data transformation and automated aggregation.
Google Data Portal Dashboard Design, Implementation, and Company-Wide Adoption (Medium)
Designed and implemented dashboards in Google Data Portal, continuously visualizing sales, quality, and usage metrics. Drove data-driven culture adoption through office entrance panel installation, employee portal placement, and weekly meeting presentations.
Town Portal Site UI Component Implementation (Medium)
A town portal site for resident information sharing in a smart lock-equipped new town development. Implemented shared UI components across multiple screens in collaboration with UI designers. Created a UI catalog with Storybook.
Frontend Engineer / Tester / Maintenance Operations
9 members • Simplex Inc. • 2019-06 — 2020-06 • Frontend, Testing • 3 tasks
VBA Excel Frontend Application with Java JSON API Integration (High)
Developed an application that communicates with a Java JSON API via VBA and displays data in Excel. Implemented dynamic column addition based on JSON response data, with Excel formulas embedded in each column. Focused on readable naming conventions.
On-Site Testing, Release Operations, Maintenance, and Client Support (Medium)
Handled on-site testing and release operations using shell commands and AWS. Led client email inquiry handling, enhancement project basic design, bug ticket creation, and status confirmation in regular meetings.
Vue.js Frontend Implementation for Insurance Company Registration Application (Medium)
Refined help tooltip and modal specifications with the designer and implemented them across all input fields on every screen. Implemented user input UI across multiple screens. Managed business scenario testing and system testing.
Data Engineer (Intern)
2 members • Graph, Inc. • 2018-01 — 2019-03 • Data • 3 tasks
Apparel E-Commerce Recommendation Engine Prototype (3 Algorithm Types) (High)
Developed a recommendation engine prototype for product suggestions displayed on the top page, product pages, and cart. Chose between content-based and collaborative filtering depending on the context (new customers, existing customers, and product pages), with serendipity considered in the design.
Customer Classification Using k-Nearest Neighbors and Purchase Behavior Analysis (Medium)
Classified existing customers using k-nearest neighbors for the apparel e-commerce executive team's marketing strategy planning. Performed basic aggregation of purchase behavior (product category sales, etc.) by customer segment.
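As a toy illustration of the k-nearest-neighbors classification used for customer segmentation (feature names, data, and segment labels here are invented, not the actual analysis):

```typescript
// Classify a customer by majority vote among the k nearest training points.
type Point = { features: number[]; label: string };

function euclidean(a: number[], b: number[]): number {
  return Math.sqrt(a.reduce((sum, ai, i) => sum + (ai - b[i]) ** 2, 0));
}

function knnClassify(training: Point[], query: number[], k: number): string {
  const votes = training
    .map((p) => ({ label: p.label, d: euclidean(p.features, query) }))
    .sort((x, y) => x.d - y.d)   // nearest first
    .slice(0, k)                 // keep the k nearest neighbors
    .reduce<Record<string, number>>((acc, { label }) => {
      acc[label] = (acc[label] ?? 0) + 1;
      return acc;
    }, {});
  // Return the label with the most votes.
  return Object.entries(votes).sort((x, y) => y[1] - x[1])[0][0];
}

// Features: [orders per month, average basket size]
const customers: Point[] = [
  { features: [1, 30], label: "occasional" },
  { features: [2, 25], label: "occasional" },
  { features: [8, 80], label: "loyal" },
  { features: [9, 95], label: "loyal" },
];
console.log(knnClassify(customers, [7, 85], 3)); // "loyal"
```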
Full-Stack Development and Deployment of Demo Chatbot (Medium)
Determined design specifications, implemented screens and API (Python/Flask) for a demo chatbot. Deployed the application to S3 and EC2.
Education
BA in Liberal Arts
Exchange programs at the University of Cambridge and the University of Toronto. Achieved TOEIC 960, IELTS 7.0, and HSK Level 6.