Senior Data Apps Engineer
The Role
As a Senior Data Apps Engineer at Miso Robotics, you will design, build, and operate the data infrastructure that supports perception, autonomy, and product analytics for kitchen automation systems. Working closely with machine learning and software engineering teams, you will ensure high-quality, reliable data flows from deployed robots to training, evaluation, and production environments. You will own end-to-end data pipelines for sensor data (including cameras), telemetry, diagnostics, and performance data, using AWS and GCP cloud environments and SQL-based data platforms to enable scalable ingestion, storage, analysis, and reporting. You will partner with ML engineers to support model training, validation, and monitoring, and collaborate with software teams to integrate data capture, logging, and feedback loops into production kitchen deployments.
You’ll play a key role in shaping the data infrastructure behind our cloud and edge solutions, working alongside a group of talented engineers and taking ownership of robust data systems that support real-time insights, automation, and analytics. From ingesting edge-device data to optimizing cloud-based pipelines, your work will directly impact the performance, scalability, and reliability of our end-to-end robotics platform. This is a unique opportunity to contribute to the evolution of a technology stack that powers intelligent systems across industries. You’ll be part of a fast-moving team that values innovation, ownership, and engineering excellence, and you'll help define how data is collected, monitored, and acted on in critical environments. You'll work with cutting-edge AWS and IoT technologies to deliver high-impact, end-to-end data pipelines for both batch and real-time analytics, collaborating closely with cross-functional teams.
What You’ll Do
- Develop scalable data pipelines for sensor data, telemetry, diagnostics, and performance metrics
- Build cloud data infrastructure using AWS and GCP for analytics and ML workloads
- Develop and optimize SQL data models and queries for reporting and analysis
- Partner with ML engineers to support dataset generation, training, validation, and monitoring
- Collaborate with software teams to integrate data capture and logging into production robots
- Define data quality checks, validation, and governance standards
- Enable system diagnostics and health monitoring through metrics, dashboards, and alerts
- Build & Manage Data Pipelines: Develop secure and scalable data ingestion and transformation pipelines using AWS services (Glue, Lambda, Athena, Timestream, S3) and edge technologies (MQTT, AWS IoT Greengrass).
- Infrastructure as Code (IaC): Define, develop, and deploy cloud infrastructure using Terraform.
- SQL & Data Transformation: Write, optimize, and maintain complex SQL queries for data transformation and validation across AWS data services.
- Observability & Security: Implement and maintain observability tools (CloudWatch, Grafana) and ensure robust data security using best practices for IAM, encryption, and access control.
- Cross-Functional Collaboration: Partner with software, DevOps, and product teams to deliver comprehensive data solutions.
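For a flavor of the data-quality work described above, here is a minimal sketch of a per-record telemetry validation check. The record schema (`robot_id`, `ts`, `metric`, `value`) is invented for illustration and is not Miso's actual schema:

```python
from datetime import datetime, timezone

# Hypothetical schema for a flattened robot telemetry record;
# field names are illustrative only.
REQUIRED_FIELDS = {"robot_id", "ts", "metric", "value"}

def validate_record(record: dict) -> list[str]:
    """Return a list of data-quality violations for one telemetry record."""
    errors = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
        return errors
    try:
        ts = datetime.fromisoformat(record["ts"])
        if ts.tzinfo is None:
            errors.append("timestamp must be timezone-aware")
        elif ts > datetime.now(timezone.utc):
            errors.append("timestamp is in the future")
    except ValueError:
        errors.append("unparseable timestamp")
    if not isinstance(record["value"], (int, float)):
        errors.append("value must be numeric")
    return errors
```

Checks like this would typically run inside the ingestion pipeline (e.g., a Lambda or Glue step), routing failing records to a quarantine location rather than dropping them silently.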
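And a toy example of the SQL transformation side, using in-memory SQLite as a stand-in for Athena; the table and column names are invented for illustration:

```python
import sqlite3

# Toy telemetry table standing in for an Athena/Glue-catalog table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE telemetry (robot_id TEXT, metric TEXT, value REAL);
    INSERT INTO telemetry VALUES
        ('flippy-01', 'fryer_temp_c', 175.0),
        ('flippy-01', 'fryer_temp_c', 181.0),
        ('flippy-02', 'fryer_temp_c', 178.0);
""")

# Per-robot rollup of the kind that might feed a reporting dashboard.
rows = conn.execute("""
    SELECT robot_id,
           COUNT(*)             AS samples,
           ROUND(AVG(value), 1) AS avg_temp,
           MAX(value)           AS peak_temp
    FROM telemetry
    GROUP BY robot_id
    ORDER BY robot_id
""").fetchall()

for row in rows:
    print(row)
# ('flippy-01', 2, 178.0, 181.0)
# ('flippy-02', 1, 178.0, 178.0)
```

In production the same aggregation would be expressed against partitioned data in S3 and materialized on a schedule, but the data-modeling question (what grain, which aggregates, which keys) is identical.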
Requirements
- BS in Computer Science
- 5+ years of relevant data and application engineering experience shipped to production at scale
- Data pipeline design and orchestration
- Cloud data platforms on AWS and GCP
- SQL and data modeling for analytics
- Python and JavaScript / TypeScript
- Sensor, telemetry, and log data handling
- ML data support and dataset management
- Data quality, validation, and governance
- Monitoring, metrics, and diagnostics systems
- Performance and cost optimization
- Linux, Docker, Terraform, and Git
- Excellent written and verbal communication skills
- Attention to detail, analytical skills, and ability to learn / adapt quickly
- Collaborative team player with a drive to take initiative and ownership
- Experience with low-latency edge-to-cloud data pipelines
- Familiarity with React/Node.js for internal tools or dashboards
- Hands-on experience with a range of AWS services and IaC tools (Terragrunt, Atlantis)
- Exposure to additional monitoring tools (e.g., Prometheus) or end-to-end testing frameworks