
PySpark/ETL Developer

Carlsbad, CA
April 22, 2020

 **** Direct Client Requirement ****

Title: PySpark/ETL Developer

Location: Carlsbad, CA

Duration: 6+ Months

Rate: DOE

Interview Type: Skype

Work Status:  Successful applicants must be legally authorized to work in the U.S.

Job Type: W2

Experience: 5 YEARS

 

Local candidates are preferred.

 

Description:

  • Design, develop, test, deploy, support, and enhance data integration solutions that seamlessly connect and integrate client enterprise systems within our Enterprise Data Platform.
  • Innovate on data integration within an Apache Spark-based platform to ensure technology solutions leverage cutting-edge integration capabilities.
  • Facilitate requirements-gathering and process-mapping workshops; review business/functional requirement documents; author technical design documents, testing plans, and scripts.
  • Assist with implementing standard operating procedures, facilitate review sessions with functional owners and end-user representatives, and leverage technical knowledge and expertise to drive improvements.

Primary Skills:

  • 4+ years of working experience in data integration and pipeline development.
  • BS degree in CS, CE, or EE.
  • 2+ years of experience with AWS Cloud data integration using Apache Spark, EMR, Glue, Kafka, Kinesis, and Lambda within S3, Redshift, RDS, and MongoDB/DynamoDB ecosystems.
  • Strong hands-on experience in Python development, especially PySpark, in an AWS Cloud environment (see the sketch after this list).
  • Ability to design, develop, test, deploy, maintain, and improve data integration pipelines.
  • Experience with Python and common Python libraries.
  • Strong analytical database experience: writing complex queries, query optimization, debugging, user-defined functions, views, indexes, etc.
  • Strong experience with source control systems such as Git and Bitbucket, and with build/continuous integration tools such as Jenkins.
  • Databricks or Apache Spark experience is a plus.
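
For illustration, the following is a minimal sketch of the kind of PySpark ETL job this role describes: reading raw data from S3, applying transformations, and writing curated Parquet back to S3. The bucket names, paths, and column names are hypothetical placeholders, not details of the client's environment.

# Minimal PySpark ETL sketch; all S3 paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("orders-etl")  # hypothetical job name
    .getOrCreate()
)

# Extract: read raw CSV files landed in S3 (placeholder path).
orders = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("s3://example-bucket/raw/orders/")
)

# Transform: basic cleansing plus a daily revenue aggregate.
daily_revenue = (
    orders
    .filter(F.col("status") == "COMPLETE")
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date")
    .agg(F.sum("amount").alias("revenue"))
)

# Load: write partitioned Parquet back to S3 for downstream
# consumers (e.g., Glue catalog tables or Redshift Spectrum).
(
    daily_revenue.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-bucket/curated/daily_revenue/")
)

spark.stop()

On EMR or Databricks, a job like this would typically be submitted via spark-submit or a scheduled workflow rather than run interactively.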

Non-technical Qualifications:

  • Highly self-driven and execution-focused, with a willingness to do “what it takes” to deliver results, as you will be expected to rapidly cover a considerable volume of data integration demands.
  • Understanding of development methodology and actual experience writing functional and technical design specifications.
  • Excellent verbal and written communication skills, in person, by telephone, and with large teams.
  • Strong prior technical and development background in either Data Services or Engineering.
  • Demonstrated experience resolving complex data integration problems.
  • Must be able to work cross-functionally. Above all else, must be equal parts data-driven and results-driven.

Primary Skills: Python/PySpark

Thanks

Varun/Siva

varun@Sohanit.com/siva@sohanit.com

Ph: 402-241-9628 / 402-241-9606

Apply here, or send your resume to resumes@sohanit.com.

Position Keywords: Bitbucket, RDS, query, Carlsbad, Glue, Jenkins, Redshift, Apache Spark, Lambda, S3, Python, EMR, data integration, queries, Kinesis, Kafka, AWS, DynamoDB, Git, Data Services, MongoDB, Python/PySpark, communication skills, Enterprise Data

Pay Rate: DOE

Job Duration: 6 Months

% Travel Required: None

Job Posted by: Consulting Services

Job ID: NTT DATA

Work Authorization: Successful applicants must be legally authorized to work in the U.S.
