GDIT

  • Software Engineer/Data Scientist - Polygraph

    Job Locations: USA-VA-Herndon
    Job ID: 2018-38282
    Number of Positions: 1
    Job Function: Information Technology
    Security Clearance Level: Top Secret/SCI with Polygraph
    Full/Part Time: Full Time
  • Job Description

    Designs and defines system architecture for new or existing computer systems.

     

    Work with a team performing entity correlation for critical systems. Support the migration of Big Data processing systems from local hardware to a cloud-based computing environment. Modernize legacy software and data analysis tools, and assist with the conversion of complex Java applications and shell scripts to a Spark- and Python-based toolset. Identify methods for more precise entity correlation and determine the value and impact of new data sets. Evaluate the benefits and drawbacks of new and emerging entity correlation, cloud computing, and big data technologies. Support special projects and new requirements as needed.

    • Analyzes existing software applications and tools, and recommends new technologies and methodologies to improve system performance and usability.
    • Designs, develops, tests, and deploys new software tools that extend or replace existing capabilities.
    • Modernizes legacy Java code as part of the migration to a cloud-based computing environment.
    • Designs and implements cloud-native data processing capabilities using tools such as Spark, Hive, Hadoop, Flink, NiFi, and other big data software.
    • Supports the design and development of a streaming data processing and entity correlation architecture.
    • Assists with the transition of existing batch processing systems to a near-real-time streaming pipeline.
    • Maintains current knowledge of relevant technology as assigned.
    • Identifies and resolves complex hardware/software interface issues.
    • Tests and documents performance benchmarks and metrics for developed software.
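
    The transition from batch processing to a near-real-time streaming pipeline mentioned above can be illustrated with a minimal, purely hypothetical micro-batching sketch (plain Python rather than the actual Spark toolset; the `micro_batches` helper and record shape are invented for illustration):

    ```python
    from itertools import islice

    def micro_batches(stream, batch_size=3):
        """Group an unbounded record stream into small batches,
        approximating a near-real-time micro-batch pipeline in
        place of a single nightly batch job."""
        it = iter(stream)
        while True:
            batch = list(islice(it, batch_size))
            if not batch:
                return
            yield batch

    # Illustrative: seven records processed as they arrive,
    # in batches of three, instead of all at once.
    records = ({"id": i} for i in range(7))
    print([len(b) for b in micro_batches(records, batch_size=3)])  # → [3, 3, 1]
    ```

    In a real Spark deployment this grouping would be handled by the framework itself; the sketch only shows the shape of the batch-to-streaming shift.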

    Qualifications:

    • 5+ years of software development experience with Java and Python.
    • Experience with Spark, Hive, Hadoop, and other big data processing technologies.
    • Experience with Linux-based systems and shell scripting.
    • Experience with application development and deployments with Amazon Web Services.
    • Experience analyzing large volumes of complex structured and unstructured data.


    Desired skills:

    • Experience developing streaming data processing pipelines.
    • Familiarity with Cloudera platform and service offerings.
    • Background in data analysis, statistical analysis, machine learning, or data science.
    • Experience with entity resolution products or a background in data matching theory and technologies.
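
    The data matching work referenced above can be sketched with a toy pairwise matcher (stdlib Python; the `same_entity` function and threshold are illustrative assumptions, not the posting's actual entity resolution product):

    ```python
    from difflib import SequenceMatcher

    def same_entity(a, b, threshold=0.85):
        """Toy matcher: normalized name similarity above a threshold
        marks two records as the same entity. Production entity
        resolution adds blocking, multiple fields, and tuned weights."""
        score = SequenceMatcher(None, a.lower(), b.lower()).ratio()
        return score >= threshold

    print(same_entity("Jonathan Smith", "Jonathon Smith"))  # → True
    print(same_entity("Jonathan Smith", "Maria Garcia"))    # → False
    ```

    The threshold trades precision against recall, which is the core tuning problem in data matching theory.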
     
     

    1. Performs complex systems development and design work that may include logic design, I/O design, firmware development, model formulation, manufacturing and development cost projections, computer architecture analysis and design, and analog or digital systems engineering.

     

    2. Performs systems modeling, simulation, and analysis.

     

    3. Plans upgrades of operating systems and designs systems enhancements.

     

    4. Develops documentation on new or existing systems.

     

    5. Develops and conducts tests to ensure systems meet documented user requirements.

     

    6. Identifies, analyzes, and resolves system problems.

     

    7. Provides system/equipment/specialized training and technical guidance.

     

    8. Determines system specifications, input/output processes, and working parameters for hardware/software compatibility.

     

    9. Provides guidance and work leadership to less-experienced systems engineers and may have supervisory responsibility.

     

    10. May serve as a technical team or task lead.

     

    11. Serves as liaison with clients, participating in meetings to ensure client needs are met.

     

    12. Maintains current knowledge of relevant technology as assigned.

     

    13. Participates in special projects as required.

    Education

    1. Bachelor's Degree in Computer Science, Engineering, or a related technical discipline.

     

    2. Master's Degree preferred.

    Qualifications

    10-15 years of related systems engineering experience.

     


    As a trusted systems integrator for more than 50 years, General Dynamics Information Technology provides information technology (IT), systems engineering, professional services, and simulation and training to customers in the defense, federal civilian government, health, homeland security, intelligence, state and local government, and commercial sectors. With approximately 32,000 professionals worldwide, the company delivers IT enterprise solutions, manages large-scale, mission-critical IT programs, and provides mission support services. GDIT is an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, or any other protected class.

     

