
Open Positions

Site Reliability Engineer

The role in a nutshell:

You will build an efficient path to production for various software systems. You will also ensure the reliability and scalability of these systems in production.

The biggest highlight of this role:

You work on software systems that are live in production and get to see the business impact of your code changes.

Your day-to-day responsibilities:

  • Infrastructure automation - You will create infrastructure as code (IaC) and automate manual processes using tools like Bash.
  • Deployment automation - You will automate the deployment of applications and services to staging and production environments. This includes building CI/CD pipelines, containerizing and orchestrating workloads, configuration management, etc.
  • Ensuring scalability - You will build auto-scaling systems that scale up or down based on user demand.
  • Ensuring observability - You will build observability into systems, making it easier to find and resolve issues before they blow up in production.
  • Performance and Cloud cost optimization - You will implement ways to improve system performance and optimize cloud costs.
  • Client engagement - At One2N, you will work directly with client teams daily. You will drive and own key project decisions.
  • Evaluating technology choices and approaches - “Should I choose Kubernetes or Nomad?” “Should I self-host my CI servers or use a SaaS solution?” These are some of the questions you will explore, considering the possible technical and financial trade-offs.
  • Documentation - You will meticulously create RCAs, runbooks, and checklists, and follow them diligently.
  • Support on-call when needed - You own the reliability of live systems in production, so we expect you to be available on-call to fix production issues. We keep checks and balances (e.g., a “follow the sun” on-call rotation) to ensure your work-life balance.
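As a flavor of the auto-scaling work above: the core of most scale-up/scale-down logic is a simple ratio rule, the same one the Kubernetes Horizontal Pod Autoscaler documents. A minimal Go sketch (illustrative only, not code from a One2N project):

```go
package main

import (
	"fmt"
	"math"
)

// desiredReplicas applies the scaling rule documented for the Kubernetes
// Horizontal Pod Autoscaler: scale the current replica count by the ratio
// of the observed metric to its target, rounding up.
func desiredReplicas(current int, observed, target float64) int {
	if current <= 0 || target <= 0 {
		return current // nothing sensible to scale
	}
	return int(math.Ceil(float64(current) * observed / target))
}

func main() {
	// 4 pods averaging 150% of the CPU target -> scale up to 6.
	fmt.Println(desiredReplicas(4, 150, 100))
	// 4 pods at 40% of the target -> scale down to 2.
	fmt.Println(desiredReplicas(4, 40, 100))
}
```

In practice you would let the HPA or KEDA evaluate this rule for you; the point is that the underlying math is small enough to reason about when debugging a scaling decision.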

Technologies and concepts you will work on:

Docker, Kubernetes, Terraform, GitHub Actions, ArgoCD, AWS, GCP, Bash, Prometheus, Grafana, Loki, ELK stack, Datadog, etc. Our programming language of choice is Go.

Some of our work includes:

  • Auto-scaling eKYC Machine Learning workloads to handle 2 Million API requests per day
  • Migrating 1.3TB of primary data from self-hosted MySQL to GCP CloudSQL
  • Building a control plane for multi-cluster Kubernetes setup
  • Implementing GitOps for continuous deployment of microservices
  • Migrating background jobs from VMs to Kubernetes using KEDA

What we expect you to possess:

Must have skills:
  • Understanding of basic Bash scripting and computer networking (SSH, TCP, HTTP).
  • Experience using a programming language (we primarily use Go) to build a basic REST API.
  • Experience using Git as a version control system.
  • Experience with any of the cloud providers (AWS, GCP, etc.) to deploy a three-tier web app.
  • High-level idea of system components (databases, caches, reverse proxies, CDNs) and how and where they fit in the big picture.
  • Experience in creating CI/CD pipelines to build and deploy at least a simple REST API application to dev/prod environments.
Good to have skills:
  • Ability to take code from local to prod by implementing Continuous Integration and Delivery principles.
  • Exposure to building, scaling, and deploying software using 12-factor app (https://12factor.net/) principles.
  • Experience working with microservices and using container orchestration tools like Kubernetes/Nomad.
  • Experience using observability tools and setting up monitoring and alerting for microservices with Prometheus, Grafana, Loki, the ELK stack, Datadog, and the like.
  • Implementing everything as code - from infra to policies, security, and configuration - using tools like Terraform, OPA, and Ansible.
  • Experience with building cloud-agnostic, homogeneous deployment solutions.
 
Don’t worry too much about being a perfect fit for each of these requirements. If you believe that you have the potential to take up this role, feel free to apply.

👉 Apply Here


Backend Software Engineer

The role in a nutshell:

You will build and ship product features for backend software systems in production.

The biggest highlight of this role:

You will work on features that move business metrics. Your code will make systems measurably more efficient, scalable, and responsive.

Your day-to-day responsibilities:

  • We work with software systems that are live in production, so you will need to work within the technology and business constraints of these systems. For example, if you have to live-migrate existing users to a new system, we’d expect you to use the Strangler Fig pattern instead of stopping the world and rewriting the monolith as microservices.
  • Work on features that move a business metric - Your code will add measurable business value for clients. For example, you’ll help increase daily active users on a platform, build integrations with third parties, increase transaction volume on a payment gateway, etc.
  • Apply a product-thinking mindset - Don’t just be a feature factory, cranking out one feature after another. Instead, evaluate whether and how your features add measurable value to business metrics. The real iteration starts when you make things live on production.
  • Optimize for early feedback - A bug on production is far costlier than a bug on the local environment. You’ll set up and follow practices such as Feature Flags, Canary rollouts, A/B tests, etc., to enable early feedback and ensure minimal blast radius.
  • Find out unknowns by doing POCs - You won’t always know all requirements with certainty. We expect you to discover the unknowns by asking questions and doing POCs.
  • Design APIs for business use cases - Given a business requirement, you will sketch out high-level APIs (sync, async, monolith, microservices as needed), which will be consumed by web and mobile teams.
  • Design data models and schema - You’ll design data models and database schema for business requirements keeping both transactional and analytics use cases (OLTP and OLAP) in mind.
  • You build it, you test it - We don’t have a separate manual QA team. At One2N, you are the owner of your code’s quality.
  • You build it, you run it - We work in small teams that own the uptime of the systems we build. We don’t throw our build artifacts to the Ops teams over the wall. Instead, we work closely with them to ensure high availability and reliability.
  • Client engagement - At One2N, you will work directly with client teams daily. You will drive and own key project decisions.
  • Evaluating technology choices and approaches - “Should I use an ORM or write plain SQL queries?” “Does it make sense to build microservices, or should we start with a monolith?” These are some of the questions you will explore, considering the possible technical and business trade-offs.
  • Documentation - You’ll document product architecture and engineering choices, creating sequence diagrams, ER diagrams, component diagrams, etc.
  • Code and architecture review - You’ll review architectural changes and suggest ways to improve system architecture. You’ll also ensure code quality via pairing and PR reviews.
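The Strangler Fig migration mentioned above boils down to a routing facade in front of the monolith: paths that have been carved out go to the new service, everything else still hits the legacy system. A minimal Go sketch; the path prefixes and service names are hypothetical, not from any real One2N project:

```go
package main

import (
	"fmt"
	"strings"
)

// migratedPrefixes lists the routes already carved out of the monolith.
// (Illustrative values only.)
var migratedPrefixes = []string{"/api/payments", "/api/profile"}

// backendFor is the heart of the Strangler Fig facade: it decides,
// per request path, whether traffic goes to the new service or the
// legacy monolith. In production this decision would sit in a reverse
// proxy or API gateway rather than application code.
func backendFor(path string) string {
	for _, p := range migratedPrefixes {
		if strings.HasPrefix(path, p) {
			return "new-service"
		}
	}
	return "legacy-monolith"
}

func main() {
	fmt.Println(backendFor("/api/payments/123")) // new-service
	fmt.Println(backendFor("/api/orders/7"))     // legacy-monolith
}
```

Because the facade owns the cut-over, each route can be migrated (and rolled back) independently, which is exactly why the pattern beats a stop-the-world rewrite.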

Technologies and concepts you will work on:

Go/Java, REST APIs in these languages (Gin, net/http, Spring Boot), DB design (PostgreSQL, MySQL), unit testing, Redis, messaging systems like RabbitMQ and Kafka, microservices, Docker, GitHub Actions, etc.

Some of our work includes:

  • Designing and implementing backend APIs for a financial services startup to provide non-collateral loans to 3.5 million users. Also, automating financial report generation, thus reducing loan approval time from 7-8 days to 1 day.
  • System design and architecture for auto-scaling eKYC Machine Learning workloads to handle 2 Million API requests/day.
  • Building data pipelines for ingesting, storing, and processing 50 million daily events from infotainment IoT devices.
  • Designing and implementing a backup and restore solution for IBM QRadar to securely backup 1.5TB of data daily to S3. Also, implementing an automated restore procedure.
  • Developing a microservice for a fintech company, enabling real-time money transfers using virtual account numbers.
  • Building a SaaS platform to detect and remediate vulnerabilities across modern tech stack based applications.
  • Building backend REST APIs for creating and managing National Health ID and health records for individuals.

What we expect you to possess:

Must have tech skills:
  • Good understanding of object-oriented (and, optionally, functional) programming paradigms.
  • Ability to design REST APIs and relational DB schemas, given a business use case.
  • Unit testing your code.
  • Experience using Git as a version control system.
  • Understanding of CI/CD, software packaging, and distribution. You may not have built CI/CD pipelines yourself, but you are at least familiar with the fundamental concepts.
  • High-level idea of system components (databases, caches, reverse proxies, CDNs) and how and where they fit in the big picture.
  • Exposure to how frontends work, and the ability to build APIs that are empathetic to frontend teams.
  • Knowing how software runs on production (basic exposure to observability, cloud, etc.)
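On the unit-testing expectation: Go teams usually write table-driven tests. The sketch below shows the style on a made-up helper (clamping a page-size query parameter for a listing API); in a real project the cases would live in a _test.go file using the testing package:

```go
package main

import "fmt"

// clampPageSize normalizes a client-supplied page size for a REST
// listing endpoint: non-positive values fall back to the default,
// oversized values are capped at the maximum.
func clampPageSize(requested, def, max int) int {
	if requested <= 0 {
		return def
	}
	if requested > max {
		return max
	}
	return requested
}

func main() {
	// Table-driven cases: each row is one input paired with the
	// expected output, so adding coverage is a one-line change.
	cases := []struct{ in, want int }{
		{0, 20},   // missing/zero -> default
		{-5, 20},  // negative -> default
		{15, 15},  // in range -> unchanged
		{500, 100}, // too large -> capped
	}
	for _, c := range cases {
		got := clampPageSize(c.in, 20, 100)
		fmt.Printf("clampPageSize(%d) = %d (want %d)\n", c.in, got, c.want)
	}
}
```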
Good to have tech skills:
  • Ability to evaluate Build vs. Buy trade-offs
  • Experience building and working with modular monoliths or decoupled microservices.
  • Ability to build homogeneous software that can run seamlessly across on-prem and cloud environments without requiring any major redesign.
 
Don’t worry too much about being a perfect fit for each of these requirements. If you believe that you have the potential to take up this role, feel free to apply.

👉 Apply Here

