Lead Data Engineer – CVS Remote Jobs
Job Description
Job Type: Lead Data Engineer (remote)
Location: Texas (work from home)
Company: CVS Health
As a member of the Data and Analytics department, you will be responsible for building and delivering best-in-class clinical data projects that advance our solutions. To provide practical solutions for our partners, you will collaborate with analytical and business partners across the product strategy, program management, IT, data strategy, and predictive analytics departments.
Responsibilities:
- Build efficient frameworks for processing clinical data using Google Cloud Platform (GCP) shared services such as BigQuery and the HL7 FHIR store.
- Design and develop clinical data pipelines, including ingestion, harmonization, and consumption frameworks, for onboarding clinical data from diverse sources formatted in different industry standards (FHIR, C-CDA, HL7 v2, JSON, XML, etc.).
- Analyze healthcare data, profile raw/source data to derive useful insights, and document data requirements to support the integration of additional data sources.
- Create cutting-edge data pipelines supporting both batch and real-time streams to enable clinical data collection, storage, processing, translation, aggregation, and dissemination across heterogeneous channels.
- Create design requirements for healthcare data items and the supporting data processing logic.
- As a leader in innovation and research, develop complex transformations, notification engines, analytical engines, and self-service analytics proofs of concept.
- Adopt a DevOps mindset to enable big data and batch/real-time analytical solutions that leverage emerging technologies.
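To illustrate the kind of harmonization work the responsibilities above describe, the hypothetical Python sketch below flattens a FHIR R4 Patient resource (JSON) into a single tabular row suitable for loading into a warehouse such as BigQuery. The resource shape, field choices, and function name are illustrative assumptions only, not CVS's actual pipeline.

```python
import json

def flatten_patient(resource: dict) -> dict:
    """Flatten a FHIR R4 Patient resource into one warehouse-ready row.

    The selected fields (id, name, birthDate, gender) are illustrative;
    a real harmonization layer would map many more elements.
    """
    # In FHIR, `name` is a list of HumanName objects; take the first entry.
    name = (resource.get("name") or [{}])[0]
    return {
        "patient_id": resource.get("id"),
        "family_name": name.get("family"),
        "given_name": " ".join(name.get("given", [])),
        "birth_date": resource.get("birthDate"),
        "gender": resource.get("gender"),
    }

# Example FHIR Patient payload (hypothetical data, not a real record).
raw = json.loads("""
{
  "resourceType": "Patient",
  "id": "example-123",
  "name": [{"family": "Doe", "given": ["Jane", "Q"]}],
  "birthDate": "1980-05-01",
  "gender": "female"
}
""")

row = flatten_patient(raw)
print(row)
```

In practice a pipeline like the one described would apply such a mapping per resource type, then stream or batch-load the resulting rows into BigQuery tables.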
Pay Scale
The usual pay scale for this position is as follows: $115,000 minimum; $230,000 maximum.
Please note that this range is the salary range for all positions in the job grade to which this position belongs. Location is just one of several variables considered when determining the actual compensation offer.
Requirements:
Required credentials
- 8+ years of hands-on experience in the healthcare industry architecting, designing, and developing enterprise data applications and analytics solutions
- 2+ years of experience with Google Cloud Platform and its shared services, including BigQuery, Cloud Composer, Cloud Storage, Cloud Dataflow, and the Healthcare API (FHIR store)
- 3+ years of experience developing applications using Python, Java, Spark, Airflow, and Kafka
- 5+ years of hands-on experience with Big Data technologies, as well as traditional RDBMSs (such as Teradata, Oracle, DB2), SQL, HQL (Hive Query Language), Python, Unix shell scripting, JSON, and XML
- 5+ years of data engineering experience working with patient, provider, clinical, utilization management, pharmacy, specialty drug, and claims data
- 2+ years of experience working with CI/CD pipeline automation systems, such as Jenkins, Git, and Control-M
- Excellent communication and articulation skills
- Comfortable working in a dynamic environment, creating and owning priorities that shift in line with our larger objectives; able to bring clarity to ambiguity while remaining open to new information that might change your view
- Deep knowledge of healthcare data, including clinical data in both proprietary and standard formats
Preferred Requirements
- Experience creating self-service analytics dashboards using Tableau or Power BI is a plus
- 2+ years of experience with interoperability, HL7 clinical standards, and formats
- 1+ years of BigQuery experience on GCP
- 1+ years of experience working on projects executed using the SAFe model
Education
- A bachelor’s degree in one of the following fields is required: business statistics, computer science, engineering, or business information systems.
- A master’s degree in one of the above fields is preferred.