Introducing self-managed data sources for Amazon OpenSearch Ingestion

Enterprise customers increasingly adopt Amazon OpenSearch Ingestion (OSI) to bring data into Amazon OpenSearch Service for various use cases. These include petabyte-scale log analytics, real-time streaming, security analytics, and searching semi-structured key-value or document data. OSI makes it simple, with straightforward integrations, to ingest data from many AWS services, including Amazon DynamoDB, Amazon Simple Storage Service (Amazon S3), Amazon Managed Streaming for Apache Kafka (Amazon MSK), and Amazon DocumentDB (with MongoDB compatibility).

Today we're announcing support for ingesting data from self-managed OpenSearch/Elasticsearch and Apache Kafka clusters. These sources can be either on Amazon Elastic Compute Cloud (Amazon EC2) or in on-premises environments.

In this post, we outline the steps to get started with these sources.

Solution overview

OSI supports the AWS Cloud Development Kit (AWS CDK), AWS CloudFormation, the AWS Command Line Interface (AWS CLI), Terraform, AWS APIs, and the AWS Management Console for deploying pipelines. In this post, we use the console to demonstrate how to create a self-managed Kafka pipeline.
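
If you prefer to script the deployment, the AWS CLI can create the same pipeline with a single command. The following is a minimal sketch, assuming a configured AWS CLI with OpenSearch Ingestion permissions; the pipeline name, capacity values, and file name are illustrative placeholders:

# Create a pipeline from a YAML definition in pipeline.yaml (placeholder values)
aws osis create-pipeline \
  --pipeline-name my-kafka-pipeline \
  --min-units 1 \
  --max-units 4 \
  --pipeline-configuration-body file://pipeline.yaml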

Prerequisites

To make sure OSI can connect to and read data successfully, the following conditions should be met:

  • Network connectivity to data sources – OSI is either deployed in a public network, such as the internet, or in a virtual private cloud (VPC). OSI deployed in a customer VPC can access data sources in the same or a different VPC, and on the internet through an attached internet gateway. If your data sources are in another VPC, common methods for network connectivity include direct VPC peering, using a transit gateway, or using customer managed VPC endpoints powered by AWS PrivateLink. If your data sources are in your corporate data center or another on-premises environment, common methods for network connectivity include AWS Direct Connect and using a network hub like a transit gateway. The following diagram shows a sample configuration of OSI running in a VPC and using Amazon OpenSearch Service as a sink. OSI runs in a service VPC and creates an elastic network interface (ENI) in the customer VPC. For self-managed data sources, these ENIs are used for reading data from the on-premises environment. OSI creates a VPC endpoint in the service VPC to send data to the sink.
  • Name resolution for data sources – OSI uses an Amazon Route 53 resolver. This resolver automatically answers queries for names local to a VPC, public domains on the internet, and records hosted in private hosted zones. If you're using a private hosted zone, make sure you have a DHCP option set enabled and attached to the VPC, using AmazonProvidedDNS as the domain name server. For more information, see Work with DHCP option sets. Additionally, you can use resolver inbound and outbound endpoints if you need a more complex resolution scheme with conditions beyond a simple private hosted zone.
  • Certificate verification for data source names – OSI supports only SASL_SSL transport for the Apache Kafka source. Within SASL, Amazon OpenSearch Service supports most authentication mechanisms, like PLAIN, SCRAM, IAM, GSSAPI, and others. When using SASL_SSL, make sure you have access to the certificates needed for OSI to authenticate. For self-managed OpenSearch data sources, make sure verifiable certificates are installed on the clusters. Amazon OpenSearch Service doesn't support insecure communication between OSI and OpenSearch, and certificate verification can't be turned off; in particular, the "insecure" configuration option is not supported.
  • Access to AWS Secrets Manager – OSI uses AWS Secrets Manager to retrieve credentials and certificates needed to communicate with self-managed data sources. For more information, see Create and manage secrets with AWS Secrets Manager. A sample CLI sketch for creating the secrets used later in this post follows this list.
  • IAM role for pipelines – You need an AWS Identity and Access Management (IAM) pipeline role to write to data sinks. For more information, see Identity and Access Management for Amazon OpenSearch Ingestion.
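
The Kafka example later in this post references two secrets: one holding the certificate used to verify the brokers, and one holding the SASL credentials as JSON key-value pairs. The following AWS CLI sketch shows one way to create them; the secret names (kafka-cert and secrets) match the sample pipeline configuration, while the file name and credential values are placeholders:

# Store the CA certificate that OSI uses to verify the Kafka brokers
aws secretsmanager create-secret \
  --name kafka-cert \
  --secret-string file://ca-cert.pem

# Store the SASL/PLAIN credentials as JSON; the pipeline reads individual
# keys with the ${{aws_secrets:secrets:username}} reference syntax
aws secretsmanager create-secret \
  --name secrets \
  --secret-string '{"username":"kafka-user","password":"EXAMPLE-PASSWORD"}'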

Create a pipeline with self-managed Kafka as a source

After you complete the prerequisites, you're ready to create a pipeline for your data source. Complete the following steps:

  1. On the OpenSearch Service console, choose Pipelines under Ingestion in the navigation pane.
  2. Choose Create pipeline.
  3. Choose Streaming under Use case in the navigation pane.
  4. Select Self managed Apache Kafka under Ingestion pipeline blueprints and choose Select blueprint.

This will populate a sample configuration for this pipeline.

  5. Provide a name for this pipeline and choose an appropriate pipeline capacity.
  6. Under Pipeline configuration, provide your pipeline configuration in YAML format. The following code snippet shows a sample configuration in YAML for SASL_SSL authentication:
    version: '2'
    kafka-pipeline:
      source:
        kafka:
          acknowledgments: true
          bootstrap_servers:
            - 'node-0.example.com:9092'
          encryption:
            type: "ssl"
            certificate: '${{aws_secrets:kafka-cert}}'
          authentication:
            sasl:
              plain:
                username: '${{aws_secrets:secrets:username}}'
                password: '${{aws_secrets:secrets:password}}'
          topics:
            - name: on-prem-topic
              group_id: osi-group-1
      processor:
        - grok:
            match:
              message:
                - '%{COMMONAPACHELOG}'
        - date:
            destination: '@timestamp'
            from_time_received: true
      sink:
        - opensearch:
            hosts: ["https://search-domain-12345567890.us-east-1.es.amazonaws.com"]
            aws:
              region: us-east-1
              sts_role_arn: 'arn:aws:iam::123456789012:role/pipeline-role'
            index: "on-prem-kafka-index"
    extension:
      aws:
        secrets:
          kafka-cert:
            secret_id: kafka-cert
            region: us-east-1
            sts_role_arn: 'arn:aws:iam::123456789012:role/pipeline-role'
          secrets:
            secret_id: secrets
            region: us-east-1
            sts_role_arn: 'arn:aws:iam::123456789012:role/pipeline-role'

  7. Choose Validate pipeline and confirm there are no errors.
  8. Under Network configuration, choose Public access or VPC access. (For this post, we choose VPC access.)
  9. If you chose VPC access, specify your VPC, subnets, and an appropriate security group so OSI can reach the outgoing ports for the data source.
  10. Under VPC attachment options, select Attach to VPC and choose an appropriate CIDR range.

OSI resources are created in a service VPC managed by AWS that is separate from the VPC you chose in the last step. This selection allows you to configure which CIDR ranges OSI should use inside this service VPC. The choice exists so you can make sure there is no address collision between the CIDR ranges in your VPC, which is attached to your on-premises network, and this service VPC. Many pipelines in your account can share the same CIDR ranges for this service VPC.
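
If you deploy with the AWS CLI instead of the console, the same network and attachment settings are passed with the --vpc-options parameter. The following sketch uses placeholder subnet, security group, and CIDR values; the exact shorthand option names may vary by CLI version, so confirm them with aws osis create-pipeline help:

# VPC access with an explicit service VPC CIDR range (placeholder values)
aws osis create-pipeline \
  --pipeline-name my-kafka-pipeline \
  --min-units 1 --max-units 4 \
  --pipeline-configuration-body file://pipeline.yaml \
  --vpc-options 'SubnetIds=subnet-0123456789abcdef0,SecurityGroupIds=sg-0123456789abcdef0,VpcAttachmentOptions={AttachToVpc=true,CidrBlock=10.99.0.0/24}'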

  11. Specify any optional tags and log publishing options, then choose Next.
  12. Review the configuration and choose Create pipeline.

You can monitor the pipeline creation and any log messages in the Amazon CloudWatch Logs log group you specified. Your pipeline should now be successfully created. For more information about how to provision capacity for the performance of this pipeline, see the section Recommended Compute Units (OCUs) for the MSK pipeline in Introducing Amazon MSK as a source for Amazon OpenSearch Ingestion.
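
You can also check creation status and follow the logs from the AWS CLI. A quick sketch; the log group name is a placeholder for whatever you chose when enabling log publishing:

# Check the pipeline status
aws osis get-pipeline --pipeline-name my-kafka-pipeline

# Follow the pipeline's CloudWatch Logs (log group name is a placeholder)
aws logs tail /aws/vendedlogs/OpenSearchIngestion/my-kafka-pipeline --follow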

Create a pipeline with self-managed OpenSearch as a source

The steps for creating a pipeline for self-managed OpenSearch are similar to the steps for creating one for Kafka. During the blueprint selection, choose Data Migration under Use case and select Self managed OpenSearch/Elasticsearch. OpenSearch Ingestion can source data from all versions of OpenSearch, and from Elasticsearch version 7.0 to version 7.10.

The following blueprint shows a sample configuration YAML for this data source:

model: "2"
opensearch-migration-pipeline:
  supply:
    opensearch:
      acknowledgments: true
      hosts: [ "https://node-0.example.com:9200" ]
      username: "${{aws_secrets:secret:username}}"
      password: "${{aws_secrets:secret:password}}"
      indices:
        embrace:
        - index_name_regex: "opensearch_dashboards_sample_data*"
        exclude:
          - index_name_regex: '..*'
  sink:
    - opensearch:
        hosts: [ "https://search-domain-12345567890.us-east-1.es.amazonaws.com" ]
        aws:
          sts_role_arn: "arn:aws:iam::123456789012:function/pipeline-role"
          area: "us-east-1"
        index: "on-prem-os"
extension:
  aws:
    secrets and techniques:
      secret:
        secret_id: "self-managed-os-credentials"
        area: "us-east-1"
        sts_role_arn: "arn:aws:iam::123456789012:function/pipeline-role"
        refresh_interval: PT1H
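
You can check a configuration like this one for errors before deploying it. A minimal sketch using the AWS CLI; the file name is a placeholder:

# Validate the YAML definition without creating the pipeline
aws osis validate-pipeline \
  --pipeline-configuration-body file://migration-pipeline.yaml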

Considerations for the self-managed OpenSearch data source

Certificates installed on the OpenSearch cluster must be verifiable for OSI to connect to this data source before reading data. Insecure connections are currently not supported.

After you're connected, make sure the cluster has sufficient read bandwidth to allow OSI to read data. Use the Min and Max OCU settings to limit OSI read bandwidth consumption. Your read bandwidth will vary depending on data volume, number of indexes, and provisioned OCU capacity. Start small and increase the number of OCUs to balance available bandwidth against acceptable migration time.
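
You can adjust capacity on an existing pipeline without recreating it. The following AWS CLI sketch raises the OCU limits; the pipeline name and values are placeholders to tune against your cluster's available read bandwidth:

# Increase min/max OCUs to shorten migration time (placeholder values)
aws osis update-pipeline \
  --pipeline-name my-migration-pipeline \
  --min-units 2 \
  --max-units 8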

This source is generally meant for one-time migration of data, not as continuous ingestion to keep data in sync between data sources and sinks.

OpenSearch Service domains support remote reindexing, but that consumes resources on your domains. Using OSI moves this compute out of the domain, and OSI can achieve significantly higher bandwidth than remote reindexing, resulting in faster migration times.

OSI doesn't support deferred replay or traffic recording today; refer to Migration Assistant for Amazon OpenSearch Service if your migration needs these capabilities.

Conclusion

In this post, we introduced self-managed sources for OpenSearch Ingestion, which enable you to ingest data from corporate data centers or other on-premises environments. OSI also supports various other data sources and integrations. Refer to Working with Amazon OpenSearch Ingestion pipeline integrations to learn about these other data sources.


About the Authors

Muthu Pitchaimani is a Search Specialist with Amazon OpenSearch Service. He builds large-scale search applications and solutions. Muthu is interested in the topics of networking and security, and is based out of Austin, Texas.

Arjun Nambiar is a Product Manager with Amazon OpenSearch Service. He focuses on ingestion technologies that enable ingesting data from a wide variety of sources into Amazon OpenSearch Service at scale. Arjun is interested in large-scale distributed systems and cloud-centered technologies, and is based out of Seattle, Washington.