Responsibilities
System Design & Architecture: Design, implement, and maintain highly scalable, fault-tolerant distributed data processing systems (batch and streaming).
End-to-End Ownership: Take collaborative ownership of features from technical design and implementation through testing, deployment, and live-site monitoring.
Modernization & AI Integration: Integrate AI-driven capabilities into data pipelines to improve efficiency, and lead the adoption of AI Coding Agents to accelerate developer velocity and improve code quality…
Required Qualifications
Bachelor's Degree in Computer Science or related technical field AND 2+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python.
Experience: 2+ years of professional software development experience with a focus on backend services or data engineering.
Coding Proficiency: Proficiency in at least one modern programming language (C#, Java, Scala, Python, or C++) with a strong understanding of object-oriented design and data structures.
AI-Assisted Development: Demonstrated experience using AI coding assistants (e.g., GitHub Copilot, Cursor, or custom agents) to rapidly prototype, refactor, and ship production-quality code.
DevOps Mindset: Experience with CI/CD pipelines, containerization (Docker/Kubernetes), and infrastructure-as-code.
Cloud Native: Deep understanding of cloud ecosystems (Azure, AWS, or GCP), specifically storage (Data Lake/Blob) and compute resources.
Growth Mindset: Excellent communication skills with the ability to navigate cross-team dependencies and drive clarity in ambiguous situations.
Preferred Qualifications
Master's Degree in Computer Science or related technical field AND 3+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python, OR Bachelor's Degree in Computer Science or related technical field AND 5+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python, OR equivalent experience.
Distributed Systems: Hands-on experience with distributed computing frameworks (e.g., Spark, Flink, Hadoop, Kafka, Databricks), and an understanding of partitioning, sharding, and consistency models.
Big Data Architecture: Experience building Lambda or Kappa architectures for real-time and batch processing.
Cost Optimization: Proven track record of optimizing compute/storage resources to reduce operational costs in a high-scale environment.
Original Posting
This role is sourced from Microsoft. Apply on the Microsoft careers page.