Data Engineer – CVS Remote Jobs
Job Description
Job Type: Data Engineer (work from home)
Location: GA, TX (work from home)
Company: CVS Health
Responsibilities:
You will work with business partners to identify opportunities to leverage big data technologies in support of Pharmacy Personalization, using a standardized set of tools and infrastructure to make analytics faster, more insightful, and more efficient. In this role you will:
- Build batch and real-time big data and cloud platforms that enable the collection, storage, modeling, and analysis of massive data sets from various channels
- Define and maintain data architecture, with a focus on using technology to enable business solutions
- Assess and provide recommendations on business relevance, with appropriate timing and deployment
- Perform architecture design and data modeling, and implement CVS Big Data platforms and analytic applications
- Bring a DevOps mindset to enable big data and batch/real-time analytical solutions that leverage emerging technologies
- Develop prototypes and proofs of concept for selected solutions, and implement complex big data projects
- Apply a creative mindset to collecting, parsing, managing, and automating data feedback loops in support of business innovation
Location: Available to work 100% remotely or in-person every Tuesday and Wednesday in Chicago, Boston/Wellesley, Dallas/Irving, or Woonsocket, RI.
Pay Scale
The typical salary range for this position is $115,000 to $230,000.
Please keep in mind that this range represents the pay range for all positions in the job grade within which this position falls. The actual salary offer will take into account a variety of factors, including location.
Requirements:
Qualifications Required
- Bachelor’s Degree in Computer Science, Engineering, Statistics, Physics, Math, or a related field, or a Master’s Degree, plus 3+ years of experience in the following:
- Strong SQL and Python skills, with 3+ years of hands-on coding experience in both
- Experience with data analysis and exploration
- Knowledge of big data frameworks (e.g., Hadoop and Spark)
- Knowledge of cloud-based platforms (e.g., Azure, GCP, AWS)
- Snowflake experience, including hands-on query tuning and optimization
- Working experience in a multi-developer environment with version control (e.g., Git)
- Knowledge of pipeline orchestration tools (e.g., Airflow, Azure Data Factory)
- Knowledge of real-time and streaming technology (e.g., Azure Event Hubs, Azure Functions, Kafka, Spark Streaming)
- Python REST API/microservice development experience
- Experience with app deployment and scaling in a containerized environment (e.g., Kubernetes, AKS)
- Experience with technical problem solving and system architecture design
- Collaboration with other technical teams (e.g., data ingestion, data science, operational systems) to align priorities and achieve deliverables
- Experience with coding standards, code reviews, and mentoring junior developers
- Project management experience, including mentoring junior technical developers
Qualifications Preferred
- Prior healthcare experience and domain knowledge
- Exposure to and understanding of DevOps best practices and CI/CD (e.g., Jenkins)
- Exposure to and understanding of containerization (e.g., Kubernetes, Docker)
Education
A Bachelor’s degree in Computer Science, Engineering, Statistics, Physics, Mathematics, or a related field, or equivalent experience, is required.
A Master’s degree with coursework in advanced algorithms, computational mathematics, data structures, and related topics is preferred.