Site Reliability Engineering Manager (GCP)

Role Overview
We are looking for an experienced Site Reliability Engineering (SRE) Manager with 8+ years of experience to lead a team of highly skilled SREs in managing, automating, and optimizing our cloud infrastructure on Google Cloud Platform (GCP). The SRE Manager will be responsible for ensuring the reliability, availability, and performance of critical services while driving automation and operational excellence.
As an SRE Manager, you will work closely with development, infrastructure, and security teams to implement scalable, resilient, and high-performance solutions. This role is ideal for someone passionate about reliability engineering, cloud automation, and observability.
Key Responsibilities:

Leadership & Team Management
• Lead, mentor, and grow a team of Site Reliability Engineers, fostering a culture of innovation, collaboration, and continuous learning.
• Define and drive SRE best practices, focusing on reliability, automation, monitoring, and incident response.
• Collaborate with development, DevOps, and security teams to align infrastructure and application reliability with business objectives.
• Own the SRE roadmap and strategy, ensuring alignment with organizational goals and industry best practices.
Reliability & Performance
• Ensure the uptime, availability, and performance of critical applications hosted on GCP.
• Implement SLOs (Service Level Objectives), SLIs (Service Level Indicators), and SLAs (Service Level Agreements) to measure system reliability.
• Conduct root cause analysis (RCA) for production incidents and drive post-mortems to improve system resilience.
Automation & CI/CD
• Automate infrastructure management using Infrastructure-as-Code (IaC) tools such as Terraform or Pulumi.
• Improve CI/CD pipelines using GitOps methodologies to enable faster, more reliable deployments.
• Champion self-healing architectures to minimize manual intervention.
Observability & Incident Management
• Implement and enhance monitoring, logging, and alerting using tools like Prometheus, Grafana, Stackdriver (Cloud Monitoring), and OpenTelemetry.
• Develop on-call rotations, runbooks, and incident management processes to minimize downtime and improve MTTR (Mean Time to Resolution).
• Use AI/ML-based anomaly detection for proactive monitoring.
Security & Compliance
• Ensure security best practices for IAM, networking, and data encryption within GCP.
• Conduct security audits and work with compliance teams to ensure adherence to SOC 2, ISO 27001, HIPAA, or other regulatory frameworks.
• Implement zero-trust security models and automated compliance policies.
Cost Optimization & Capacity Planning
• Optimize cloud costs using GCP cost management tools, rightsizing, and auto-scaling.
• Implement capacity planning strategies to balance cost and performance.
• Work with finance teams to forecast infrastructure costs and optimize spend.
Required Skills & Qualifications:

Technical Skills
• Strong expertise in Google Cloud Platform (GCP) services such as GKE, Cloud Run, Cloud Functions, Cloud SQL, BigQuery, and Cloud Spanner.
• Hands-on experience with Terraform, Pulumi, or Cloud Deployment Manager for Infrastructure-as-Code (IaC).
• Experience with CI/CD tools like GitHub Actions, ArgoCD, Spinnaker, or Jenkins.
• Strong knowledge of Kubernetes (GKE) and container orchestration.
• Experience with SRE principles such as error budgets, chaos engineering, and observability.
• Strong scripting and automation skills in Python.
• Experience with monitoring and observability tools (Stackdriver, Datadog, Prometheus, Grafana, New Relic).
Leadership & Soft Skills
• Proven experience managing and mentoring SRE teams.
• Strong problem-solving skills with the ability to troubleshoot complex production issues.
• Ability to work in a fast-paced, DevOps-oriented environment.
• Strong communication and stakeholder management skills.
• Experience collaborating with cross-functional teams, including engineering, security, and product teams.
Preferred Qualifications

• GCP Professional Cloud Architect or GCP Professional DevOps Engineer certification.
• Experience with multi-cloud or hybrid cloud environments.
• Hands-on experience with serverless computing and event-driven architectures.
• Prior experience in high-traffic, distributed systems.


Senior Data Engineer

Job Title: Senior Data Engineer
Type: Fulltime
Experience: 7-12 yrs
Location: Remote
Skills: SQL, Python, ML understanding, and open-source frameworks
Job Description:
Create and optimize complex SQL queries for data extraction, manipulation, and reporting.
Develop robust Python scripts and applications for data processing and automation.
Collaborate with cross-functional teams to define architecture and implement best practices in cloud-native development.
Monitor and troubleshoot performance issues, ensuring reliability and scalability of cloud solutions.
Write clean, maintainable, and well-documented code following software development best practices.
Ensure data security and compliance with organizational and industry standards.
Strong hands-on expertise with SQL, including writing complex queries and optimizing database performance.
Advanced Python programming skills, including experience with libraries and frameworks relevant to data processing and integration.


Senior Principal Analyst (SPA) – Data Engineering

Responsibilities

The Senior Principal Analyst will be responsible for driving large multi-environment projects end to end and will act primarily as an individual contributor.

– Design and develop reusable classes for ETL code pipelines and be responsible for optimized ETL framework design.

– Plan and execute projects and be able to guide junior members of the team.

– Excellent presentation and communication skills, and a strong team player.

– Experience working with clients, stakeholders, and product owners to collect requirements and create solutions and estimations.

Qualifications & Experience:

– 5+ years of experience in solutioning and design for data & analytics projects

– Strong data modelling, data warehousing, and architecture skills, along with ETL & SQL skills

– Experience in handling multiple projects as Data Architect and/or Solution Architect

– 6+ years of experience with big data processing technologies such as Spark, Hadoop, etc.

– 6+ years’ experience programming in Python/Scala/Java and Linux shell scripting

– 6+ years of hands-on experience in implementing data Integration frameworks to ingest terabytes of data in batch and real-time to an analytical environment.

– 3+ years of experience in developing big data applications in Cloud (AWS/GCP/Azure and/or Snowflake)

– Deep knowledge of Database technologies such as Relational and NoSQL

– Hands on experience with ETL pipeline development and functional programming preferably with Scala, Python, Spark, and R

– Must be proficient in developing an ETL layer for high-volume transaction processing.

– Experience with any ETL tool (Informatica/DataStage/SSIS/Talend), along with data modelling and data warehousing concepts

– Good to have: job execution/debugging experience with PySpark and PyKafka classes, in combination with Docker containerization.

– Agile/Scrum methodology experience is required.


MuleSoft Developer

Factspan Overview:

Factspan is a pure-play data and analytics services organization. We partner with Fortune 500 enterprises to build an analytics center of excellence, generating insights and solutions from raw data to solve business challenges, make strategic recommendations, and implement new processes that help them succeed. With offices in Seattle, Washington and Bangalore, India, we use a global delivery model to serve our customers. Our customers include industry leaders from the Retail, Financial Services, Hospitality, and Technology sectors.

Job Description:

We are seeking a seasoned MuleSoft Developer with deep expertise in the MuleSoft Anypoint Platform and a strong focus on data integration. The ideal candidate will have a minimum of 6 years of hands-on experience in designing, developing, and deploying integration solutions using MuleSoft, with a proven track record of implementing real-time, high-performance, and scalable integrations.

Responsibilities:

• Overall 7 years of IT experience, with a minimum of 5 years of relevant experience in MuleSoft development and a strong focus on data integration projects.
• Extensive knowledge of the MuleSoft Anypoint Platform, including Anypoint Studio, API Manager, and CloudHub.
• Proficient in designing and implementing RESTful APIs and SOAP web services.
• Strong understanding of data formats like JSON, XML, and CSV, and data transformation tools and techniques.
• Experience with various integration patterns and architectural styles, such as ESB, microservices, and event-driven architecture.
• Familiarity with message brokers (e.g., JMS, RabbitMQ) and streaming technologies (e.g., Kafka).
• Solid understanding of security practices related to API development and data integration.
• Excellent problem-solving and analytical skills.
• Strong communication skills, both verbal and written.
• Ability to work collaboratively in a team environment and manage multiple tasks effectively.
• MuleSoft Certified Developer or Architect preferred.

Qualifications & Experience:

Bachelor’s degree in Computer Science, Information Systems, or a related field

