
Database Analyst III

Req ID: J2249859

  • Location
    Guadalajara, Jalisco, Mexico
    Penang, Penang Island, Malaysia
    Remote - Kyiv (province), Ukraine
  • Category: Information Technology
  • Posted: Tuesday, September 29, 2020
  • Type: Full time

Job Description

JOB SUMMARY:

Responsible for the design, development, and implementation of big data and IoT analytics solutions within Jabil's Enterprise Information Platform. A successful candidate will be able to create, evaluate, and implement plans and design proposals for high-impact analytics solutions built on leading-edge cloud technologies and methods, weighing key factors such as long-term effectiveness (service delivery and cost), practicality, technical limitations, and criticality. Must be a team player who is both passionate about learning and firm on standards in order to deliver reporting solutions effectively.

JOB REQUIREMENTS:
- Minimum of 6 years of experience as a Data Engineer on advanced analytics and data provisioning projects
- Business requirements analysis and technical design
- Communication with key business users, data scientists, and IT tech leads
- Hands-on experience with big data processing: cleaning, transforming, and preparing data
- Hands-on experience working with data lakes (Hadoop, AWS S3, Azure ADLS)
- Expertise in joining datasets and creating de-normalized views (a minimal sketch follows this list)
- Experience with data preparation and feature engineering as part of machine learning or advanced analytics projects
- Understanding of cloud solutions and architecture
- Experience using code/version management tools
- Basic knowledge of the supply chain and manufacturing process domain
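
To give a feel for the data preparation and de-normalization work this role involves, here is a minimal PySpark sketch that cleans two raw datasets and joins them into one wide view. The paths, column names, and cleaning steps are illustrative assumptions, not Jabil specifics.

```python
# Hypothetical PySpark sketch: clean two raw datasets and join them
# into a single de-normalized view. All paths, column names, and
# schemas are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("denormalized-view-sketch").getOrCreate()

# Read raw sensor readings and device metadata from a data lake location.
readings = spark.read.parquet("s3://example-lake/raw/sensor_readings/")
devices = spark.read.parquet("s3://example-lake/raw/device_metadata/")

# Basic cleaning: drop rows missing key fields, normalize the timestamp.
readings_clean = (
    readings
    .dropna(subset=["device_id", "reading_value"])
    .withColumn("reading_ts", F.to_timestamp("reading_ts"))
)

# Join into a de-normalized view so analysts and data scientists can
# query one wide table instead of re-joining on every request.
denormalized = (
    readings_clean
    .join(devices, on="device_id", how="left")
    .select("device_id", "site", "device_type", "reading_ts", "reading_value")
)

# Persist the prepared view back to the curated zone in Parquet.
denormalized.write.mode("overwrite").parquet("s3://example-lake/curated/readings_wide/")
```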
 
EXPERIENCE REQUIREMENTS and KNOWLEDGE:
- Deep knowledge of AWS data services
- Experienced in writing AWS Glue and Spark code for data processing and preparation (see the sketch after this list)
- Experienced working with Glue development endpoints, Step Functions, and Lambda
- Experienced in SageMaker notebooks, using Python and PySpark to create features and write them to S3 and Aurora
- Experienced in IoT analytics using AWS Kinesis Data Streams or Kafka streaming
- Experienced in automating with CI/CD pipelines by modularizing and generalizing code with variables
- Experienced with one or more of the popular ETL tools
- Working knowledge of Talend
- Working knowledge of Snowflake
- Working knowledge of the Hadoop toolset, e.g., HDFS, Hive, Impala
- Working knowledge of Linux shell scripting
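
As a rough, non-authoritative illustration of the Glue and feature-engineering experience described above, the sketch below outlines a parameterized AWS Glue (PySpark) job that derives simple per-device features and writes them to S3. The job arguments, column names, and aggregations are assumptions made for illustration.

```python
# Hypothetical AWS Glue (PySpark) job sketch: prepare model features
# and write them to S3. Parameters and feature logic are illustrative
# assumptions, not Jabil's actual pipeline.
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Parameterizing the input and output paths keeps the same script
# reusable across environments, which is what makes CI/CD promotion
# of one modular job practical.
args = getResolvedOptions(sys.argv, ["JOB_NAME", "source_path", "target_path"])

glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read curated data, derive simple per-device features, and write
# Parquet to S3 where a SageMaker notebook or training job can read it.
df = spark.read.parquet(args["source_path"])
features = (
    df.groupBy("device_id")
      .agg(
          F.avg("reading_value").alias("avg_reading"),
          F.stddev("reading_value").alias("std_reading"),
          F.count("*").alias("n_readings"),
      )
)
features.write.mode("overwrite").parquet(args["target_path"])

job.commit()
```

Passing source_path and target_path as job arguments, rather than hard-coding them, is one common way to modularize a Glue job with variables as the CI/CD bullet above describes.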

EDUCATION:
• BS in Computer Science or equivalent experience in these areas
• Demonstrable technical experience in big data analytics

Not ready to apply? Join Jabil's professional network!

Learn more about upcoming career opportunities and Jabil events.

Join now