Data Engineer – CVS Remote Jobs
Job Description
Job Type: Data Engineer (work from home)
Location: Texas (remote)
Company: CVS Health
CVS Health is a leader in healthcare innovation, dedicated to providing individuals, employers, healthcare professionals, producers, and others with cutting-edge benefits, products, and services. The industry is changing, and there are countless opportunities to bring about significant improvement in the health of millions of people. If you are passionate about making a difference through your work and want to be recognized for your talent and expertise in solving complex clinical data problems while contributing to key strategic initiatives, consider developing your career with a Fortune 5 healthcare leader.
Synopsis of Position:
As a member of the Data and Analytics department, you will be responsible for building and delivering clinical data projects that advance best-in-class solutions. To provide practical solutions for our partners, you will work with analytical and business partners from the product strategy, program management, IT, data strategy, and predictive analytics departments.
Responsibilities:
- Develop efficient frameworks for processing clinical data using Google Cloud Platform (GCP) shared services such as BigQuery and the HL7 FHIR store.
- Design and develop clinical data pipelines incorporating ingestion, harmonization, and consumption frameworks to onboard clinical data from multiple sources in various industry-standard formats (FHIR, C-CDA, HL7 V2, JSON, XML, etc.).
- Analyze healthcare data, profile raw/source data to gain useful insights, and document data requirements to support the integration of additional data sources.
- Create cutting-edge data pipelines supporting both batch and real-time streams to enable clinical data collection, storage, processing, translation, aggregation, and dissemination over heterogeneous channels (a minimal pipeline sketch follows this list).
- Create design requirements for the data objects used in healthcare and the supporting data processing logic.
- Lead innovation and research by developing complex transformations, notification engines, analytical engines, and self-service analytics proofs of concept.
- Adopt a DevOps mindset to enable big data and batch/real-time analytical solutions that leverage emerging technologies.
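For illustration only, the sketch below shows the kind of batch/real-time pipeline described above: a streaming Apache Beam job (runnable on Cloud Dataflow) that reads clinical JSON events from Pub/Sub and appends them to a BigQuery table. The project, topic, dataset, table, and field names are placeholder assumptions, not details taken from this posting.

```python
# Illustrative sketch only: a streaming Apache Beam pipeline that reads
# clinical JSON messages from Pub/Sub and writes them to BigQuery.
# All project, topic, dataset, and field names below are placeholders.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions, StandardOptions


def parse_message(message: bytes) -> dict:
    """Decode a Pub/Sub payload into a flat record for BigQuery."""
    record = json.loads(message.decode("utf-8"))
    return {
        "patient_id": record.get("patient_id"),
        "event_type": record.get("event_type"),
        "event_time": record.get("event_time"),
    }


def run() -> None:
    options = PipelineOptions(
        runner="DataflowRunner",
        project="example-project",              # placeholder GCP project
        region="us-central1",
        temp_location="gs://example-bucket/tmp",
    )
    options.view_as(StandardOptions).streaming = True

    with beam.Pipeline(options=options) as pipeline:
        (
            pipeline
            | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
                topic="projects/example-project/topics/clinical-events")
            | "ParseJson" >> beam.Map(parse_message)
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                table="example-project:clinical.events",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                # Assumes the destination table already exists.
                create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
            )
        )


if __name__ == "__main__":
    run()
```

The same pattern extends naturally to harmonizing HL7 V2 or FHIR payloads before they land in the warehouse.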
Pay Scale
For this position, the usual pay scale is as follows: Minimum: $115,000; Maximum: $230,000
Please note that this range represents the salary range for all positions in the job grade to which this position belongs. Location is just one of several factors that will be considered when determining the actual compensation offer.
Requirements:
Required Qualifications
- 8+ years of hands-on experience architecting, designing, and developing enterprise data applications and analytics solutions in the healthcare industry
- 2+ years of experience with Google Cloud Platform and its shared services, such as Cloud Dataflow, Cloud Storage, Pub/Sub, Cloud Composer, BigQuery, and the Healthcare API (FHIR store)
- 3+ years of experience developing applications using Python, Java, Spark, Airflow, and Kafka (see the Airflow sketch after this list)
- 5+ years of hands-on experience with Big Data technologies, as well as traditional RDBMSs (such as Teradata, Oracle, DB2), SQL, HQL (Hive Query Language), Python, Unix shell scripting, JSON, and XML
- 5+ years of data engineering experience working with patient, provider, clinical, utilization management, pharmacy, specialty drug, and claims data
- 2+ years of experience with tools for automating CI/CD pipelines (e.g., Jenkins, Git, Control-M)
- Excellent communication and articulation skills
- Comfortable working in a dynamic environment, creating and owning priorities that shift with our broader objectives; able to bring clarity to ambiguity while staying open to new information that might change your view
- Deep knowledge of healthcare data, including clinical data in both proprietary and standard formats
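As a hedged illustration of the Airflow/Cloud Composer experience called out above, the sketch below defines a small DAG that loads daily clinical claim extracts from Cloud Storage into BigQuery. The bucket, dataset, and table names are placeholders, not references to actual CVS Health systems.

```python
# Illustrative sketch only: a small Airflow DAG (as run on Cloud Composer)
# that loads daily clinical claim extracts from Cloud Storage into BigQuery.
# Bucket, dataset, and table names are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import (
    GCSToBigQueryOperator,
)

with DAG(
    dag_id="clinical_claims_daily_load",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    load_claims = GCSToBigQueryOperator(
        task_id="load_claims_to_bigquery",
        bucket="example-clinical-landing",          # placeholder bucket
        source_objects=["claims/{{ ds }}/*.json"],  # partitioned by run date
        source_format="NEWLINE_DELIMITED_JSON",
        destination_project_dataset_table="example-project.clinical.claims",
        write_disposition="WRITE_APPEND",
        autodetect=True,
    )
```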
Preferred Qualifications
- Experience creating self-service analytics dashboards with Tableau or Power BI is an added advantage
- 2+ years of expertise with interoperability, HL7 clinical standards, and related formats
- 1+ years of BigQuery experience on GCP
- 1+ years of experience working on projects executed under the SAFe model
Education
- A bachelor’s degree in one of the following fields is required: business statistics, computer science, engineering, or business information systems.
- A master’s degree in one of the aforementioned fields is preferred.