[
{
"title": "HUB Overview",
"pageID": "164470108",
"pageLink": "/display/GMDM/HUB+Overview",
"content": "MDM Integration services provide services for clients using MDM systems (Reltio or Nucleus 360) in following fields:As abstraction layer providing API for MDM data management.Delivering common processes that are hiding complexity of interaction with Reltio API.Enhancing Reltio functionality by data quality validating and through cleaning services.Extending data protection by limiting clients' access.Allowing to publish MDM data to multiple clients using event streaming and batch mode.MDM Integration Services consist of:Integration Gateway providing services for data handling in Reltio (storing and accessing entities directly).Publishing Hub being responsible for publishing OV profiles to consumers.The MDM HUB ecosystem is presented at the picture below.   "
},
{
"title": "Modules",
"pageID": "164470022",
"pageLink": "/display/GMDM/Modules",
"content": ""
},
{
"title": "Direct Channel",
"pageID": "164469882",
"pageLink": "/display/GMDM/Direct+Channel",
"content": "DescriptionDirect channel exposes unified REST API interface to update/search profiles in MDM systems. The diagram below shows the logical architecture of the Direct Channel module. Logical architectureComponentsComponentSubcomponentDescriptionAPI GatewayKong API Gateway components playing the role of proxAuthentication engineKong module providing client authentication servicesManager/Orchestratorjava microservice orchestrating API callsData Quality Enginequality service validating data sent to Reltio Authorization Engineauthorize client access to MDM resourcesMDM routing engineroute calls to MDM systemsTransaction Loggerregisters API calls in EFK service for tracing reasons. Reltio Adapterhandles communication with Reltio MDM systemNucleus Adapterhandle communication with Nucleus MDM systemHUB StoreMongoDB database plays the role of persistence store for MDM HUB logicAPI Routerrouting requests to regional MDM Hub servicesFlowsFlowDescriptionCreate/Update HCP/HCO/MCOCreate or Update HCP/HCO/MCO entitySearch EntitySearch entityGet EntityRead entityRead LOVRead LOVValidate HCPValidate HCP"
},
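The Direct Channel's create/update flow is a REST call through the API gateway, with the Data Quality Engine validating the payload before it reaches Reltio. A minimal sketch of assembling and validating such a request body; the field names and the required-field rule here are illustrative assumptions, not the actual Reltio model or the HUB's real validation rules:

```python
import json

# Illustrative required fields -- the real data quality rules are configured
# in the HUB's Data Quality Engine, not hard-coded like this.
REQUIRED = ("firstName", "lastName", "country")

def build_hcp_payload(**fields) -> str:
    """Validate required fields and serialize a create/update HCP request body."""
    missing = [f for f in REQUIRED if not fields.get(f)]
    if missing:
        raise ValueError(f"missing required HCP fields: {missing}")
    return json.dumps({"type": "HCP", "attributes": fields})

body = build_hcp_payload(firstName="Jan", lastName="Kowalski", country="PL")
```

The resulting JSON string would be POSTed to the gateway endpoint of the target environment (see the environment pages for the actual URLs).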
{
"title": "Streaming channel",
"pageID": "164469812",
"pageLink": "/display/GMDM/Streaming+channel",
"content": "DescriptionStreaming channel distributes MDM profile updates through KAFKA topics in near real-time to consumers.  Reltio events generate on profile changes are sent via AWS SQS queue to MDM HUB.MDM HUB enriches events with profile data and dedupes them. During the process, callback service process data (for example: calculate ranks and hco names, clean unused topics) and updates profile in Reltio with the calculated values.   Publisher distributes events to target client topics based on the configured routing rules.MDM Datamart built-in Snowflake provides SQL access to up to date MDM data in both the object and the relational model. Logical architectureComponentsComponentDescriptionReltio subscriberConsume events from ReltioCallback serviceTrigger callback actions on incoming events for example calculated rankingsDirect ChannelOrchestrates Reltio updates triggered by callbacksHUB StoreKeeps MDM data historyReconciliation serviceReconcile missing eventsPublisherEvaluates routing rules and publishes data do downstream consumersSnowflake Data MartExposes MDM data in the relation modelKafka ConnectSends data to Snowflake from KafkaEntity enricherEnrich events with full data retrieved from ReltioFlowsFlowDescriptionReltio events streamingDistribute Reltio MDM data changes to downstream consumers in the streaming modeNucleus events streamingDistribute Nucleus MDM data changes to downstream consumers in the streaming modeSnowflake: Events publish flowDistribute Reltio MDM data changes to Snowflake DM"
},
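The Publisher described above evaluates routing rules to decide which client topics receive each enriched event. A minimal sketch of that idea; the rule and event shapes below are invented for illustration, since this page does not document the real routing-rule format:

```python
# Each rule maps a predicate over the event to a target Kafka topic.
# Rule and event shapes are made up for illustration only.
ROUTING_RULES = [
    {"topic": "client-a.hcp", "type": "HCP", "countries": {"US", "CA"}},
    {"topic": "client-b.all", "type": None, "countries": None},  # catch-all rule
]

def route(event: dict) -> list[str]:
    """Return the target topics whose rules match the event, in rule order."""
    topics = []
    for rule in ROUTING_RULES:
        if rule["type"] is not None and event.get("type") != rule["type"]:
            continue  # entity type does not match this rule
        if rule["countries"] is not None and event.get("country") not in rule["countries"]:
            continue  # country filter does not match
        topics.append(rule["topic"])
    return topics
```

An event can match several rules and therefore be published to several client topics, which matches the fan-out behaviour described for the Publisher.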
{
"title": "Java Batch Channel",
"pageID": "164469814",
"pageLink": "/display/GMDM/Java+Batch+Channel",
"content": "DescriptionJava Batch Channel is the set of services responsible to load file extract delivered by the external source to Reltio. The heart of the module is file loader service aka inc-batch-channel that maps flat model to Reltio model and orchestrates the load through asynchronous interface manage by Manager. Batch flows are managed by Apache Airflow scheduler.Logical architectureComponentsApache Airflow - batch flows scheduler and orcherstartor.File loader aka inc-batch-channel - maps files to Reltio model  and orchestrate profiles loads Manager/Orchestrator - java microservice orchestrating API calls FlowsIncremental batches - generic flow for loading source data from flat files into Reltio"
},
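The file loader's core job above is mapping flat records to the nested Reltio model. A toy illustration of that flat-to-nested mapping; the column names and attribute layout are invented here, since the real mapping is configuration-driven inside inc-batch-channel:

```python
import csv
import io

def map_row(row: dict) -> dict:
    """Map one flat CSV row to a nested Reltio-style entity (invented shape)."""
    return {
        "type": "HCP",
        "attributes": {
            "Name": [{"value": f"{row['first_name']} {row['last_name']}"}],
            "Address": [{"value": {"Country": row["country"]}}],
        },
    }

# A tiny in-memory "file extract" standing in for a delivered flat file.
flat = "first_name,last_name,country\nAnna,Nowak,PL\n"
entities = [map_row(r) for r in csv.DictReader(io.StringIO(flat))]
```

In the real channel the mapped entities would then be submitted through the Manager's asynchronous interface rather than collected in a list.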
{
"title": "ETL Batch Channel",
"pageID": "164469835",
"pageLink": "/display/GMDM/ETL+Batch+Channel",
"content": "DescriptionETL Batch channel exposes REST API  for ETL components like Informatica and manages a loading process in an asynchronous way.With its own cache based on Hub Store, it supports full loads providing a delta detection logic.Logical architectureComponentsBatch service - exposes REST API for ETL platforms to load batch data into Reltio and controls the loading process.Hub Store - a registry of batch loads and a cache to handle delta detection.Manager/Orchestrator - java microservice orchestrating API calls into Reltio and providing validation and data protection services. FlowsETL batch flow -  ageneric flow for loading source data with ETL tools like Informatica into Reltio"
},
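The delta detection mentioned above, comparing an incoming full load against the Hub Store cache so that only new or changed records are pushed to Reltio, can be sketched as follows. The content-hash scheme is an assumption for illustration, not the actual Hub Store implementation:

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    """Stable hash of a record's content (assumed scheme, for illustration)."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def detect_delta(full_load: dict[str, dict], cache: dict[str, str]) -> list[str]:
    """Return IDs that are new or changed versus the cached hashes."""
    changed = []
    for rec_id, record in full_load.items():
        h = record_hash(record)
        if cache.get(rec_id) != h:
            changed.append(rec_id)
            cache[rec_id] = h  # update the cache, as the batch service would
    return changed
```

On the first full load everything is a delta; on subsequent loads only records whose content hash differs from the cached value are forwarded.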
{
"title": "Environments",
"pageID": "164470172",
"pageLink": "/display/GMDM/Environments",
"content": "Reltio Export IPsEnvironmentIPsReltio Team commentEMEA NON-PRODEMEA PROD- ●●●●●●●●●●●●- ●●●●●●●●●●●●- ●●●●●●●●●●●●are available across all EMEA environmentsAPAC NON-PRODAPAC PROD- ●●●●●●●●●●●- ●●●●●●●●●●●●●●- ●●●●●●●●●●●●●are available across all APAC environmentsGBLUS NON-PRODGBLUS PROD- ●●●●●●●●●●●●●- ●●●●●●●●●●●- ●●●●●●●●●●●●● for the dev/test and 361 tenants, the IPs can be used by any of the environments.AMER NON-PRODAMER PRODThe AMER tenants use the same access points as the US"
},
{
"title": "AMER",
"pageID": "196878948",
"pageLink": "/display/GMDM/AMER",
"content": "ContactsTypeContactCommentSupported MDMHUB environmentsDLDL-ADL-ATP-GLOBAL_MDM_RELTIO@COMPANY.comSupports Reltio instancesGBLUS - Reltio only"
},
{
"title": "AMER Non PROD Cluster",
"pageID": "196878950",
"pageLink": "/display/GMDM/AMER+Non+PROD+Cluster",
"content": "Physical ArchitectureKubernetes clusternameIPConsole addressresource typeAWS regionFilesystemComponentsTypeatp-mdmhub-nprod-amer10.9.64.0/1810.9.0.0/18https://pdcs-som1d.COMPANY.comEKS over EC2us-east-1~60GB per node,6TBx2 replicated Portworx volumesKong, Kafka, Mongo, Prometheus, MDMHUB microservicesoutbound and inboundNon PROD - backend NamespaceComponentPod nameDescriptionLogskongKongmdmhub-kong-kong-*API managerkubectl logs {{pod name}} --namespace kongamer-backendKafkamdm-kafka-kafka-0mdm-kafka-kafka-1mdm-kafka-kafka-2Kafkalogsamer-backendKafka Exportermdm-kafka-kafka-exporter-*Kafka Monitoring - Prometheuskubectl logs {{pod name}} --namespace amer-backendamer-backendZookeeper mdm-kafka-zookeeper-0mdm-kafka-zookeeper-1mdm-kafka-zookeeper-2Zookeeperlogsamer-backendMongomongo-0Mongologsamer-backendKibanakibana-kb-*EFK - kibanakubectl logs {{pod name}} --namespace amer-backendamer-backendFluentDfluentd-*EFK - fluentdkubectl logs {{pod name}} --namespace amer-backendamer-backendElasticsearchelasticsearch-es-default-0elasticsearch-es-default-1EFK - elasticsearchkubectl logs {{pod name}} --namespace amer-backendamer-backendSQS ExporterTODOSQS Reltio exporterkubectl logs {{pod name}} --namespace amer-backendmonitoringCadvisormonitoring-cadvisor-*Docker Monitoring - Prometheuskubectl logs {{pod name}} --namespace monitoringamer-backendMongo Connectormonstache-*EFK - mongo → elasticsearch exporterkubectl logs {{pod name}} --namespace amer-backendamer-backendMongo exportermongo-exporter-*mongo metrics exporter---amer-backendGit2Consulgit2consul-*GIT to Consul loaderkubectl logs {{pod name}} --namespace amer-backendamer-backendConsulconsul-consul-server-0consul-consul-server-1consul-consul-server-2Consulkubectl logs {{pod name}} --namespace amer-backendamer-backendSnowflake connectoramer-dev-mdm-connect-cluster-connect-*amer-qa-mdm-connect-cluster-connect-*amer-stage-mdm-connect-cluster-connect-*Snowflake Kafka Connectorkubectl logs {{pod name}} --namespace 
amer-backendmonitoringKafka Connect Exportermonitoring-jdbc-snowflake-exporter-amer-dev-*monitoring-jdbc-snowflake-exporter-amer-stage-*monitoring-jdbc-snowflake-exporter-amer-stage-*Kafka Connect metric exporterkubectl logs {{pod name}} --namespace monitoringamer-backendAkhqakhq-*Kafka UIlogsCertificates Wed Aug 31 21:57:19 CEST 2016 until: Sun Aug 31 22:07:17 CEST 2036ResourceCertificate LocationValid fromValid to Issued ToKibana, Elasticsearch, Kong, Airflow, Consul, Prometheus,http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/amer/nprod/namespaces/kong/config_files/certsThu, 13 Jan 2022 14:13:53 GMTTue, 10 Jan 2023 14:13:53 GMThttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/Kafkahttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/amer/nprod/namespaces/amer-backend/secrets.yaml.encryptedJan 18 11:07:55 2022 GMTJan 18 11:07:55 2024 GMTkafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094Setup and check connections:Snowflake - managing service accounts - EMEA Snowflake Access"
},
{
"title": "AMER DEV Services",
"pageID": "196878953",
"pageLink": "/display/GMDM/AMER+DEV+Services",
"content": "HUB EndpointsAPI & Kafka & S3 & UIResource NameEndpointGateway API OAuth2 External - DEVhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-amer-devPing Federatehttps://devfederate.COMPANY.com/as/introspect.oauth2Gateway API KEY auth - DEVhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-amer-devKafkakafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094MDM HUB S3 s3://gblmdmhubnprodamrasp100762HUB UIhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ui-amer-dev/#/dashboardSnowflake MDM DataMartResource NameEndpointDB Urlhttps://amerdev01.us-east-1.privatelink.snowflakecomputing.com/DB NameCOMM_AMER_MDM_DMART_DEV_DBDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_AMER_MDM_DMART_DEV_DEVOPS_ROLEGrafana dashboardsResource NameEndpointHUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=amer_dev&var-node=All&var-type=entitiesKafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=amer_dev&var-topic=All&var-node=1Host Statisticshttps://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-job=node_exporter&var-env=amer_nprodJMX Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=amer_dev&var-component=managerKonghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=amer_nprod&var-service=All&var-node=AllMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=amer_dev&var-interval=$__auto_interval_intervalKibana dashboardsResource NameEndpointKibanahttps://kibana-amer-nprod-gbl-mdm-hub.COMPANY.com (DEV prefixed dashboards)DocumentationResource NameEndpointManager API documentationhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-amer-dev/swagger-ui/index.html?configUrl=/api-gw-spec-amer-dev/v3/api-docs/swagger-configBatch Service API 
documentationhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-amer-dev/swagger-ui/index.html?configUrl=/api-batch-spec-amer-dev/v3/api-docs/swagger-configAirflowResource NameEndpointAirflow UIhttps://airflow-amer-nprod-gbl-mdm-hub.COMPANY.comConsulResource NameEndpointConsul UIhttps://consul-amer-nprod-gbl-mdm-hub.COMPANY.comAKHQ - KafkaResource NameEndpointAKHQ Kafka UIhttps://akhq-amer-nprod-gbl-mdm-hub.COMPANY.comComponents & LogsENV (namespace)ComponentPods (* marks the part of the name that changes)DescriptionLogsPod portsamer-devManagermdmhub-mdm-manager-*Gateway APIlogs8081 - application API, 8000 - remote debugging (when enabled, can be used to debug the app in the environment), 9000 - Prometheus exporter, 8888 - Spring Boot actuator, 8080 - serves the swagger API definition, if availableamer-devBatch Servicemdmhub-batch-service-*Batch service, ETL batch loaderlogsamer-devApi routermdmhub-mdm-api-router-*API gateway across multiple tenantslogsamer-devSubscribermdmhub-reltio-subscriber-*SQS Reltio events subscriberlogsamer-devEnrichermdmhub-entity-enricher-*Reltio events enricherlogsamer-devCallbackmdmhub-callback-service-*Events processor, callback, and pre-callback servicelogsamer-devPublishermdmhub-event-publisher-*Events publisherlogsamer-devReconciliationmdmhub-mdm-reconciliation-service-*Reconciliation servicelogsClientsETL - COMPANY (GBLUS)MDM SystemsReltioDEV - wn60kG248ziQSMWResource NameEndpointSQS queue namehttps://sqs.us-east-1.amazonaws.com/930358522410/dev_wJmSQ8GWI8Q6Fl1Reltiohttps://dev.reltio.com/ui/wJmSQ8GWI8Q6Fl1https://dev.reltio.com/reltio/api/wJmSQ8GWI8Q6Fl1Reltio Gateway Usersvc-pfe-mdmhubRDMhttps://rdm.reltio.com/lookups/dyzB7cAPhATUslEInternal ResourcesResource NameEndpointMongomongodb://mongo-amer-nprod-gbl-mdm-hub.COMPANY.com:27017Kafkakafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094 SASL SSLKibanahttps://kibana-amer-nprod-gbl-mdm-hub.COMPANY.comElasticsearchhttps://elastic-amer-nprod-gbl-mdm-hub.COMPANY.comMigrationThe amer dev is 
the first environment that was migrated from the old infrastructure (EC2 based) to the new, Kubernetes-based one. The following table presents old endpoints and their substitutes in the new environment. Everyone who wants to connect with amer dev has to use the new endpoints.DescriptionOld endpointNew endpointManager APIhttps://amraelp00010074.COMPANY.com:8443/dev-exthttps://gbl-mdm-hub-amer-nprod.COMPANY.com:8443/dev-exthttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-amer-devBatch Service APIhttps://amraelp00010074.COMPANY.com:8443/dev-batch-exthttps://gbl-mdm-hub-amer-nprod.COMPANY.com:8443/dev-batch-exthttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-batch-amer-devConsul APIhttps://amraelp00010074.COMPANY.com:8443/v1https://gbl-mdm-hub-amer-nprod.COMPANY.com:8443/v1https://consul-amer-nprod-gbl-mdm-hub.COMPANY.com/v1Kafkaamraelp00010074.COMPANY.com:9094kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094"
},
{
"title": "AMER QA Services",
"pageID": "228921283",
"pageLink": "/display/GMDM/AMER+QA+Services",
"content": "HUB EndpointsAPI & Kafka & S3 & UIResource NameEndpointGateway API OAuth2 External - DEVhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-amer-qaPing Federatehttps://devfederate.COMPANY.com/as/introspect.oauth2Gateway API KEY auth - DEVhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-amer-qaKafkakafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094MDM HUB S3 s3://gblmdmhubnprodamrasp100762HUB UIhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ui-amer-qa/#/dashboardSnowflake MDM DataMartResource NameEndpointDB Urlhttps://amerdev01.us-east-1.privatelink.snowflakecomputing.com/DB NameCOMM_AMER_MDM_DMART_QA_DBDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_AMER_MDM_DMART_QA_DEVOPS_ROLEResource NameEndpointHUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=amer_qa&var-node=All&var-type=entitiesKafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=amer_qa&var-topic=All&var-node=1Host Statisticshttps://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-job=node_exporter&var-env=amer_nprodJMX Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=amer_qa&var-component=mdm-managerKonghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=amer_nprod&var-service=All&var-node=AllMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=amer_nprod&var-interval=$__auto_interval_intervalResource NameEndpointKibanahttps://kibana-amer-nprod-gbl-mdm-hub.COMPANY.com (QA prefixed dashboards)DocumentationResource NameEndpointManager API documentationhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-amer-qa/swagger-ui/index.html?configUrl=/api-gw-spec-amer-qa/v3/api-docs/swagger-configBatch Service API 
documentationhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-amer-qa/swagger-ui/index.html?configUrl=/api-batch-spec-amer-qa/v3/api-docs/swagger-configAirflowResource NameEndpointAirflow UIhttps://airflow-amer-nprod-gbl-mdm-hub.COMPANY.comConsulResource NameEndpointConsul UIhttps://consul-amer-nprod-gbl-mdm-hub.COMPANY.comAKHQ - KafkaResource NameEndpointAKHQ Kafka UIhttps://akhq-amer-nprod-gbl-mdm-hub.COMPANY.comComponents & LogsENV (namespace)ComponentPods (* marks the part of the name that changes)DescriptionLogsPod portsamer-qaManagermdmhub-mdm-manager-*Gateway APIlogs8081 - application API, 8000 - remote debugging (when enabled, can be used to debug the app in the environment), 9000 - Prometheus exporter, 8888 - Spring Boot actuator, 8080 - serves the swagger API definition, if availableamer-qaBatch Servicemdmhub-batch-service-*Batch service, ETL batch loaderlogsamer-qaApi routermdmhub-mdm-api-router-*API gateway across multiple tenantslogsamer-qaSubscribermdmhub-reltio-subscriber-*SQS Reltio events subscriberlogsamer-qaEnrichermdmhub-entity-enricher-*Reltio events enricherlogsamer-qaCallbackmdmhub-callback-service-*Events processor, callback, and pre-callback servicelogsamer-qaPublishermdmhub-event-publisher-*Events publisherlogsamer-qaReconciliationmdmhub-mdm-reconciliation-service-*Reconciliation servicelogsClientsETL - COMPANY (GBLUS)MDM SystemsReltioDEV - wn60kG248ziQSMWResource NameEndpointSQS queue namehttps://sqs.us-east-1.amazonaws.com/930358522410/test_805QOf1Xnm96SPjReltiohttps://test.reltio.com/ui/805QOf1Xnm96SPjhttps://test.reltio.com/reltio/api/805QOf1Xnm96SPjReltio Gateway Usersvc-pfe-mdmhubRDMhttps://rdm.reltio.com/lookups/805QOf1Xnm96SPjInternal ResourcesResource NameEndpointMongomongodb://mongo-amer-nprod-gbl-mdm-hub.COMPANY.com/reltio_amer-qa:27017Kafkakafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094 SASL SSLKibanahttps://kibana-amer-nprod-gbl-mdm-hub.COMPANY.comElasticsearchhttps://elastic-amer-nprod-gbl-mdm-hub.COMPANY.com"
},
{
"title": "AMER STAGE Services",
"pageID": "228921315",
"pageLink": "/display/GMDM/AMER+STAGE+Services",
"content": "HUB EndpointsAPI & Kafka & S3 & UIResource NameEndpointGateway API OAuth2 External - DEVhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-amer-stagePing Federatehttps://stgfederate.COMPANY.com/as/introspect.oauth2Gateway API KEY auth - DEVhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-amer-stageKafkakafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094MDM HUB S3 s3://gblmdmhubnprodamrasp100762HUB UIhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ui-amer-stage/#/dashboardSnowflake MDM DataMartResource NameEndpointDB Urlhttps://amerdev01.us-east-1.privatelink.snowflakecomputing.com/DB NameCOMM_AMER_MDM_DMART_STG_DBDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_AMER_MDM_DMART_STG_DEVOPS_ROLEResource NameEndpointHUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=amer_stage&var-node=All&var-type=entitiesKafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=amer_stage&var-topic=All&var-node=1Host Statisticshttps://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-job=node_exporter&var-env=amer_nprodJMX Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=amer_stage&var-component=mdm-managerKonghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=amer_nprod&var-service=All&var-node=AllMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=amer_nprod&var-interval=$__auto_interval_intervalResource NameEndpointKibanahttps://kibana-amer-nprod-gbl-mdm-hub.COMPANY.com (STAGE prefixed dashboards)DocumentationResource NameEndpointManager API documentationhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-amer-stage/swagger-ui/index.html?configUrl=/api-gw-spec-amer-stage/v3/api-docs/swagger-configBatch Service API 
documentationhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-amer-stage/swagger-ui/index.html?configUrl=/api-batch-spec-amer-stage/v3/api-docs/swagger-configAirflowResource NameEndpointAirflow UIhttps://airflow-amer-nprod-gbl-mdm-hub.COMPANY.comConsulResource NameEndpointConsul UIhttps://consul-amer-nprod-gbl-mdm-hub.COMPANY.comAKHQ - KafkaResource NameEndpointAKHQ Kafka UIhttps://akhq-amer-nprod-gbl-mdm-hub.COMPANY.comComponents & LogsENV (namespace)ComponentPods (* marks the part of the name that changes)DescriptionLogsPod portsamer-stageManagermdmhub-mdm-manager-*Gateway APIlogs8081 - application API, 8000 - remote debugging (when enabled, can be used to debug the app in the environment), 9000 - Prometheus exporter, 8888 - Spring Boot actuator, 8080 - serves the swagger API definition, if availableamer-stageBatch Servicemdmhub-batch-service-*Batch service, ETL batch loaderlogsamer-stageApi routermdmhub-mdm-api-router-*API gateway across multiple tenantslogsamer-stageSubscribermdmhub-reltio-subscriber-*SQS Reltio events subscriberlogsamer-stageEnrichermdmhub-entity-enricher-*Reltio events enricherlogsamer-stageCallbackmdmhub-callback-service-*Events processor, callback, and pre-callback servicelogsamer-stagePublishermdmhub-event-publisher-*Events publisherlogsamer-stageReconciliationmdmhub-mdm-reconciliation-service-*Reconciliation servicelogsClientsETL - COMPANY (GBLUS)MDM SystemsReltioDEV - wn60kG248ziQSMWResource NameEndpointSQS queue namehttps://sqs.us-east-1.amazonaws.com/930358522410/test_K7I3W3xjg98Dy30Reltiohttps://test.reltio.com/ui/K7I3W3xjg98Dy30https://test.reltio.com/reltio/api/K7I3W3xjg98Dy30Reltio Gateway Usersvc-pfe-mdmhubRDMhttps://rdm.reltio.com/lookups/K7I3W3xjg98Dy30Internal ResourcesResource NameEndpointMongomongodb://mongo-amer-nprod-gbl-mdm-hub.COMPANY.com/reltio_amer-stage:27017Kafkakafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094 SASL 
SSLKibanahttps://kibana-amer-nprod-gbl-mdm-hub.COMPANY.comElasticsearchhttps://elastic-amer-nprod-gbl-mdm-hub.COMPANY.com"
},
{
"title": "GBLUS-DEV Services",
"pageID": "234701562",
"pageLink": "/display/GMDM/GBLUS-DEV+Services",
"content": "HUB EndpointsAPI & Kafka & S3 & UIResource NameEndpointGateway API OAuth2 External - DEVhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gblus-devPing Federatehttps://devfederate.COMPANY.com/as/introspect.oauth2Gateway API KEY auth - DEVhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-gblus-devKafkakafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094MDM HUB S3 s3://gblmdmhubnprodamrasp100762HUB UIhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ui-gblus-dev/#/dashboardSnowflake MDM DataMartResource NameEndpointDB Urlhttps://amerdev01.us-east-1.privatelink.snowflakecomputing.comDB NameCOMM_GBL_MDM_DMART_DEVDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_DEV_MDM_DMART_DEVOPS_ROLEGrafana dashboardsResource NameEndpointHUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=gblus_dev&var-node=All&var-type=entitiesKafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=gblus_dev&var-topic=All&var-node=1&var-instance=amraelp00007335.COMPANY.com:9102Host Statisticshttps://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-job=node_exporter&var-env=amer_nprodJMX Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=gblus_dev&var-component=&var-instance=All&var-node=Konghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=amer_nprod&var-service=All&var-node=AllMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=amer_dev&var-interval=$__auto_interval_intervalResource NameEndpointKibanahttps://kibana-amer-nprod-gbl-mdm-hub.COMPANY.com (DEV prefixed dashboards)DocumentationResource NameEndpointManager API 
documentationhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-gblus-dev/swagger-ui/index.html?configUrl=/api-gw-spec-gblus-dev/v3/api-docs/swagger-configBatch Service API documentationhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-gblus-dev/swagger-ui/index.html?configUrl=/api-batch-spec-gblus-dev/v3/api-docs/swagger-configAirflowResource NameEndpointAirflow UIhttps://airflow-amer-nprod-gbl-mdm-hub.COMPANY.comConsulResource NameEndpointConsul UIhttps://consul-amer-nprod-gbl-mdm-hub.COMPANY.comAKHQ - KafkaResource NameEndpointAKHQ Kafka UIhttps://akhq-amer-nprod-gbl-mdm-hub.COMPANY.comComponents & LogsENV (namespace)ComponentPods (* marks the part of the name that changes)DescriptionLogsPod portsgblus-stageManagermdmhub-mdm-manager-*Gateway APIlogs8081 - application API, 8000 - remote debugging (when enabled, can be used to debug the app in the environment), 9000 - Prometheus exporter, 8888 - Spring Boot actuator, 8080 - serves the swagger API definition, if availablegblus-stageBatch Servicemdmhub-batch-service-*Batch service, ETL batch loaderlogsgblus-stageApi routermdmhub-mdm-api-router-*API gateway across multiple tenantslogsgblus-stageSubscribermdmhub-reltio-subscriber-*SQS Reltio events subscriberlogsgblus-stageEnrichermdmhub-entity-enricher-*Reltio events enricherlogsgblus-stageCallbackmdmhub-callback-service-*Events processor, callback, and pre-callback servicelogsgblus-stagePublishermdmhub-event-publisher-*Events publisherlogsgblus-stageReconciliationmdmhub-mdm-reconciliation-service-*Reconciliation servicelogsClientsETL - COMPANY (GBLUS)MDM SystemsReltioDEV(gblus_dev) - sw8BkTZqjzGr7hnResource NameEndpointSQS queue namehttps://sqs.us-east-1.amazonaws.com/930358522410/dev_sw8BkTZqjzGr7hnReltiohttps://dev.reltio.com/ui/sw8BkTZqjzGr7hnhttps://dev.reltio.com/reltio/api/sw8BkTZqjzGr7hnReltio Gateway Usersvc-pfe-mdmhubRDMhttps://rdm.reltio.com/%s/wq2MxMmfTUCYk9kInternal ResourcesResource 
NameEndpointMongomongodb://mongo-amer-nprod-gbl-mdm-hub.COMPANY.com:27017Kafkakafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094 SASL SSLKibanahttps://kibana-amer-nprod-gbl-mdm-hub.COMPANY.comElasticsearchhttps://elastic-amer-nprod-gbl-mdm-hub.COMPANY.comMigrationThe following table presents old endpoints and their substitutes in the new environment. Everyone who wants to connect with gblus dev has to use new endpoints.DescriptionOld endpointNew endpointManager APIhttps://gbl-mdm-hub-us-nprod.COMPANY.com:8443/dev-exthttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gblus-devBatch Service APIhttps://gbl-mdm-hub-us-nprod.COMPANY.com:8443/dev-batch-exthttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-batch-gblus-devConsul APIhttps://gbl-mdm-hub-us-nprod.COMPANY.com:8443/v1https://consul-amer-nprod-gbl-mdm-hub.COMPANY.com/v1Kafkaamraelp00007335.COMPANY.com:9094kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094"
},
{
"title": "GBLUS-QA Services",
"pageID": "234701566",
"pageLink": "/display/GMDM/GBLUS-QA+Services",
"content": "HUB EndpointsAPI & Kafka & S3 & UIResource NameEndpointGateway API OAuth2 External - DEVhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gblus-qaPing Federatehttps://devfederate.COMPANY.com/as/introspect.oauth2Gateway API KEY auth - DEVhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-gblus-qaKafkakafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094MDM HUB S3 s3://gblmdmhubnprodamrasp100762HUB UIhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ui-gblus-qa/#/dashboardSnowflake MDM DataMartResource NameEndpointDB Urlhttps://amerdev01.us-east-1.privatelink.snowflakecomputing.com/DB NameCOMM_GBL_MDM_DMART_QADefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_QA_MDM_DMART_DEVOPS_ROLEResource NameEndpointHUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=amer_qa&var-node=All&var-type=entitiesKafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=gblus_qa&var-topic=All&var-instance=All&var-node=Host Statisticshttps://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-job=node_exporter&var-env=amer_nprodJMX Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=gblus_qa&var-component=mdm-managerKonghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=amer_nprod&var-service=All&var-node=AllMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=amer_nprod&var-interval=$__auto_interval_intervalResource NameEndpointKibanahttps://kibana-amer-nprod-gbl-mdm-hub.COMPANY.com (QA prefixed dashboards)DocumentationResource NameEndpointManager API documentationhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-gblus-qa/swagger-ui/index.html?configUrl=/api-gw-spec-gblus-qa/v3/api-docs/swagger-configBatch Service API 
documentationhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-gblus-qa/swagger-ui/index.html?configUrl=/api-batch-spec-gblus-qa/v3/api-docs/swagger-configAirflowResource NameEndpointAirflow UIhttps://airflow-amer-nprod-gbl-mdm-hub.COMPANY.comConsulResource NameEndpointConsul UIhttps://consul-amer-nprod-gbl-mdm-hub.COMPANY.comAKHQ - KafkaResource NameEndpointAKHQ Kafka UIhttps://akhq-amer-nprod-gbl-mdm-hub.COMPANY.comComponents & LogsENV (namespace)ComponentPods (* marks the part of the name that changes)DescriptionLogsPod portsgblus-stageManagermdmhub-mdm-manager-*Gateway APIlogs8081 - application API, 8000 - remote debugging (when enabled, can be used to debug the app in the environment), 9000 - Prometheus exporter, 8888 - Spring Boot actuator, 8080 - serves the swagger API definition, if availablegblus-stageBatch Servicemdmhub-batch-service-*Batch service, ETL batch loaderlogsgblus-stageApi routermdmhub-mdm-api-router-*API gateway across multiple tenantslogsgblus-stageSubscribermdmhub-reltio-subscriber-*SQS Reltio events subscriberlogsgblus-stageEnrichermdmhub-entity-enricher-*Reltio events enricherlogsgblus-stageCallbackmdmhub-callback-service-*Events processor, callback, and pre-callback servicelogsgblus-stagePublishermdmhub-event-publisher-*Events publisherlogsgblus-stageReconciliationmdmhub-mdm-reconciliation-service-*Reconciliation servicelogsClientsETL - COMPANY (GBLUS)MDM SystemsReltioQA(gblus_qa) - rEAXRHas2ovllvTSQS queue namehttps://sqs.us-east-1.amazonaws.com/930358522410/test_rEAXRHas2ovllvTReltiohttps://test.reltio.com/ui/rEAXRHas2ovllvThttps://test.reltio.com/reltio/api/rEAXRHas2ovllvTReltio Gateway Usersvc-pfe-mdmhubRDMhttps://rdm.reltio.com/%s/u78Dh9B87sk6I2vInternal ResourcesResource NameEndpointMongomongodb://mongo-amer-nprod-gbl-mdm-hub.COMPANY.com/reltio_amer-qa:27017Kafkakafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094 SASL 
SSLKibanahttps://kibana-amer-nprod-gbl-mdm-hub.COMPANY.comElasticsearchhttps://elastic-amer-nprod-gbl-mdm-hub.COMPANY.comMigrationThe following table presents old endpoints and their substitutes in the new environment. Everyone who wants to connect with gblus qa has to use new endpoints.DescriptionOld endpointNew endpointManager APIhttps://gbl-mdm-hub-us-nprod.COMPANY.com:8443/qa-exthttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gblus-qaBatch Service APIhttps://gbl-mdm-hub-us-nprod.COMPANY.com:8443/qa-batch-exthttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-batch-gblus-qaConsul APIhttps://gbl-mdm-hub-us-nprod.COMPANY.com:8443/v1https://consul-amer-nprod-gbl-mdm-hub.COMPANY.com/v1Kafkaamraelp00007335.COMPANY.com:9094kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094"
},
{
"title": "GBLUS-STAGE Services",
"pageID": "243863074",
"pageLink": "/display/GMDM/GBLUS-STAGE+Services",
"content": "HUB EndpointsAPI & Kafka & S3 & UIResource NameEndpointGateway API OAuth2 External - DEVhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gblus-stagePing Federatehttps://stgfederate.COMPANY.com/as/introspect.oauth2Gateway API KEY auth - DEVhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-gblus-stageKafkakafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094MDM HUB S3 s3://gblmdmhubnprodamrasp100762HUB UIhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ui-gblus-stage/#/dashboardSnowflake MDM DataMartResource NameEndpointDB Urlhttps://amerdev01.us-east-1.privatelink.snowflakecomputing.com/DB NameCOMM_GBL_MDM_DMART_STGDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_STG_MDM_DMART_DEVOPS_ROLEResource NameEndpointHUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=gblus_stage&var-node=All&var-type=entitiesKafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=gblus_stage&var-topic=All&var-node=1Host Statisticshttps://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-job=node_exporter&var-env=amer_nprodJMX Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=amer_stage&var-component=mdm-managerKonghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=amer_nprod&var-service=All&var-node=AllMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=amer_nprod&var-interval=$__auto_interval_intervalResource NameEndpointKibanahttps://kibana-amer-nprod-gbl-mdm-hub.COMPANY.com (STAGE prefixed dashboards)DocumentationResource NameEndpointManager API documentationhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-gblus-stage/swagger-ui/index.html?configUrl=/api-gw-spec-gblus-stage/v3/api-docs/swagger-configBatch Service API 
documentationhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-gblus-stage/swagger-ui/index.html?configUrl=/api-batch-spec-gblus-stage/v3/api-docs/swagger-configAirflowResource NameEndpointAirflow UIhttps://airflow-amer-nprod-gbl-mdm-hub.COMPANY.comConsulResource NameEndpointConsul UIhttps://consul-amer-nprod-gbl-mdm-hub.COMPANY.comAKHQ - KafkaResource NameEndpointAKHQ Kafka UIhttps://akhq-amer-nprod-gbl-mdm-hub.COMPANY.comComponents & LogsENV (namespace)ComponentPods (* means the part of the name that changes)DescriptionLogsPod portsgblus-stageManagermdmhub-mdm-manager-*Gateway APIlogs8081 - application API,8000 - if remote debugging is enabled, this port can be used to debug the app in the environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger API definition - if availablegblus-stageBatch Servicemdmhub-batch-service-*Batch service, ETL batch loaderlogsgblus-stageApi routermdmhub-mdm-api-router-*API gateway across multiple tenantslogsgblus-stageSubscribermdmhub-reltio-subscriber-*SQS Reltio events subscriberlogsgblus-stageEnrichermdmhub-entity-enricher-*Reltio events enricherlogsgblus-stageCallbackmdmhub-callback-service-*Events processor, callback, and pre-callback servicelogsgblus-stagePublishermdmhub-event-publisher-*Events publisherlogsgblus-stageReconciliationmdmhub-mdm-reconciliation-service-*Reconciliation servicelogsClientsETL - COMPANY (GBLUS)MDM SystemsReltioSTAGE(gblus_stage) - 48ElTIteZz05XwTSQS queue namehttps://sqs.us-east-1.amazonaws.com/930358522410/test_48ElTIteZz05XwTReltiohttps://test.reltio.com/ui/48ElTIteZz05XwThttps://test.reltio.com/reltio/api/48ElTIteZz05XwTReltio Gateway Usersvc-pfe-mdmhubRDMhttps://rdm.reltio.com/%s/5YqAPYqQnUtQJqpInternal ResourcesResource NameEndpointMongomongodb://mongo-amer-nprod-gbl-mdm-hub.COMPANY.com/reltio_amer-stage:27017Kafkakafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094 SASL 
SSLKibanahttps://kibana-amer-nprod-gbl-mdm-hub.COMPANY.comElasticsearchhttps://elastic-amer-nprod-gbl-mdm-hub.COMPANY.com"
},
{
"title": "AMER PROD Cluster",
"pageID": "234698165",
"pageLink": "/display/GMDM/AMER+PROD+Cluster",
"content": "Physical ArchitectureKubernetes clusternameIPConsole addressresource typeAWS regionFilesystemComponentsTypeatp-mdmhub-prod-amer10.9.64.0/1810.9.0.0/18https://pdcs-drm1p.COMPANY.comEKS over EC2us-east-1~60GB per node,6TBx3 replicated Portworx volumesKong, Kafka, Mongo, Prometheus, MDMHUB microservicesoutbound and inboundPROD - backend NamespaceComponentPod nameDescriptionLogskongKongmdmhub-kong-kong-*API managerkubectl logs {{pod name}} --namespace kongamer-backendKafkamdm-kafka-kafka-0mdm-kafka-kafka-1mdm-kafka-kafka-2Kafkalogsamer-backendKafka Exportermdm-kafka-kafka-exporter-*Kafka Monitoring - Prometheuskubectl logs {{pod name}} --namespace amer-backendamer-backendZookeeper mdm-kafka-zookeeper-0mdm-kafka-zookeeper-1mdm-kafka-zookeeper-2Zookeeperlogsamer-backendMongomongo-0Mongologsamer-backendKibanakibana-kb-*EFK - kibanakubectl logs {{pod name}} --namespace amer-backendamer-backendFluentDfluentd-*EFK - fluentdkubectl logs {{pod name}} --namespace amer-backendamer-backendElasticsearchelasticsearch-es-default-0elasticsearch-es-default-1EFK - elasticsearchkubectl logs {{pod name}} --namespace amer-backendamer-backendSQS ExporterTODOSQS Reltio exporterkubectl logs {{pod name}} --namespace amer-backendmonitoringCadvisormonitoring-cadvisor-*Docker Monitoring - Prometheuskubectl logs {{pod name}} --namespace monitoringamer-backendMongo Connectormonstache-*EFK - mongo → elasticsearch exporterkubectl logs {{pod name}} --namespace amer-backendamer-backendMongo exportermongo-exporter-*mongo metrics exporter---amer-backendGit2Consulgit2consul-*GIT to Consul loaderkubectl logs {{pod name}} --namespace amer-backendamer-backendConsulconsul-consul-server-0consul-consul-server-1consul-consul-server-2Consulkubectl logs {{pod name}} --namespace amer-backendamer-backendSnowflake connectoramer-prod-mdm-connect-cluster-connect-*amer-qa-mdm-connect-cluster-connect-*amer-stage-mdm-connect-cluster-connect-*Snowflake Kafka Connectorkubectl logs {{pod name}} --namespace 
amer-backendmonitoringKafka Connect Exportermonitoring-jdbc-snowflake-exporter-amer-prod-*monitoring-jdbc-snowflake-exporter-amer-stage-*monitoring-jdbc-snowflake-exporter-amer-stage-*Kafka Connect metric exporterkubectl logs {{pod name}} --namespace monitoringamer-backendAkhqakhq-*Kafka UIlogsCertificates Wed Aug 31 21:57:19 CEST 2016 until: Sun Aug 31 22:07:17 CEST 2036ResourceCertificate LocationValid fromValid to Issued ToKibana, Elasticsearch, Kong, Airflow, Consul, Prometheus,http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/amer/prod/namespaces/kong/config_files/certsThu, 13 Jan 2022 14:13:53 GMTTue, 10 Jan 2023 14:13:53 GMThttps://api-amer-prod-gbl-mdm-hub.COMPANY.com/Kafkahttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/amer/prod/namespaces/amer-backend/secrets.yaml.encryptedJan 18 11:07:55 2022 GMTJan 18 11:07:55 2024 GMTkafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094Setup and check connections:Snowflake - managing service accounts - via http://btondemand.COMPANY.com/ - Get Support → Submit ticket → GBL-ATP-COMMERCIAL SNOWFLAKE DOMAIN ADMI"
},
{
"title": "AMER PROD Services",
"pageID": "234698356",
"pageLink": "/display/GMDM/AMER+PROD+Services",
"content": "HUB EndpointsAPI & Kafka & S3 & UIResource NameEndpointGateway API OAuth2 External - DEVhttps://api-amer-prod-gbl-mdm-hub.COMPANY.com/ext-api-gw-amer-prodPing Federatehttps://prodfederate.COMPANY.com/as/introspect.oauth2Gateway API KEY auth - DEVhttps://api-amer-prod-gbl-mdm-hub.COMPANY.com/api-gw-amer-prodKafkakafka-amer-prod-gbl-mdm-hub.COMPANY.com:9094MDM HUB S3 s3://gblmdmhubprodamrasp101478HUB UIhttps://api-amer-prod-gbl-mdm-hub.COMPANY.com/ui-amer-prod/#/dashboardSnowflake MDM DataMartResource NameEndpointDB Urlhttps://amerprod01.us-east-1.privatelink.snowflakecomputing.com/DB NameCOMM_AMER_MDM_DMART_PROD_DBDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_AMER_MDM_DMART_PROD_DEVOPS_ROLEResource NameEndpointHUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=amer_prod&var-node=All&var-type=entitiesKafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=amer_prod&var-topic=All&var-node=1Host Statisticshttps://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-job=node_exporter&var-env=amer_prodJMX Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=amer_prod&var-component=managerKonghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=amer_prod&var-service=All&var-node=AllMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=amer_prod&var-interval=$__auto_interval_intervalResource NameEndpointKibanahttps://kibana-amer-prod-gbl-mdm-hub.COMPANY.com/ (PROD prefixed dashboards)DocumentationResource NameEndpointManager API documentationhttps://api-amer-prod-gbl-mdm-hub.COMPANY.com/api-gw-spec-amer-prod/swagger-ui/index.html?configUrl=/api-gw-spec-amer-prod/v3/api-docs/swagger-configBatch Service API 
documentationhttps://api-amer-prod-gbl-mdm-hub.COMPANY.com/api-batch-spec-amer-prod/swagger-ui/index.html?configUrl=/api-batch-spec-amer-prod/v3/api-docs/swagger-configAirflowResource NameEndpointAirflow UIhttps://airflow-amer-prod-gbl-mdm-hub.COMPANY.comConsulResource NameEndpointConsul UIhttps://consul-amer-prod-gbl-mdm-hub.COMPANY.com/ui/AKHQ - KafkaResource NameEndpointAKHQ Kafka UIhttps://akhq-amer-prod-gbl-mdm-hub.COMPANY.com/Components & LogsENV (namespace)ComponentPods (* means the part of the name that changes)DescriptionLogsPod portsamer-prodManagermdmhub-mdm-manager-*Gateway APIlogs8081 - application API,8000 - if remote debugging is enabled, this port can be used to debug the app in the environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger API definition - if availableamer-prodBatch Servicemdmhub-batch-service-*Batch service, ETL batch loaderlogsamer-prodApi routermdmhub-mdm-api-router-*API gateway across multiple tenantslogsamer-prodSubscribermdmhub-reltio-subscriber-*SQS Reltio events subscriberlogsamer-prodEnrichermdmhub-entity-enricher-*Reltio events enricherlogsamer-prodCallbackmdmhub-callback-service-*Events processor, callback, and pre-callback servicelogsamer-prodPublishermdmhub-event-publisher-*Events publisherlogsamer-prodReconciliationmdmhub-mdm-reconciliation-service-*Reconciliation servicelogsClientsETL - COMPANY (GBLUS)MDM SystemsReltioPROD - Ys7joaPjhr9DwBJResource NameEndpointSQS queue namehttps://sqs.us-east-1.amazonaws.com/930358522410/361_Ys7joaPjhr9DwBJReltiohttps://361.reltio.com/ui/Ys7joaPjhr9DwBJhttps://361.reltio.com/reltio/api/Ys7joaPjhr9DwBJReltio Gateway Usersvc-pfe-mdmhub-prodRDMhttps://rdm.reltio.com/lookups/LEo5zuzyWyG1xg4Internal ResourcesResource NameEndpointMongomongodb://mongo-amer-prod-gbl-mdm-hub.COMPANY.com:27017Kafkakafka-amer-prod-gbl-mdm-hub.COMPANY.com:9094 SASL SSLKibanahttps://kibana-amer-prod-gbl-mdm-hub.COMPANY.com/Elasticsearchhttps://elastic-amer-prod-gbl-mdm-hub.COMPANY.com/"
},
{
"title": "GBL US PROD Services",
"pageID": "250133277",
"pageLink": "/display/GMDM/GBL+US+PROD+Services",
"content": "HUB EndpointsAPI & Kafka & S3Gateway API OAuth2 External - DEVhttps://api-amer-prod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gblus-prodPing Federatehttps://prodfederate.COMPANY.com/as/introspect.oauth2Gateway API KEY auth - DEVhttps://api-amer-prod-gbl-mdm-hub.COMPANY.com/api-gw-gblus-prodKafkakafka-amer-prod-gbl-mdm-hub.COMPANY.com:9094MDM HUB S3 s3://gblmdmhubprodamrasp101478Snowflake MDM DataMartDB Urlhttps://amerprod01.us-east-1.privatelink.snowflakecomputing.comDB NameCOMM_GBL_MDM_DMART_PRODDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_PROD_MDM_DMART_DEVOPS_ROLEHUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=gblus_prod&var-node=All&var-type=entitiesKafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=gblus_prod&var-topic=All&var-node=1&var-instance=amraelp00007848.COMPANY.com:9102JMX Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=gblus_prod&var-component=manager&var-node=1&var-instance=amraelp00007848.COMPANY.com:9104Konghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=amer_prod&var-service=All&var-node=AllMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=amer_prod&var-interval=$__auto_interval_intervalKibanahttps://kibana-amer-prod-gbl-mdm-hub.COMPANY.com/ (PROD prefixed dashboards)DocumentationManager API documentationhttps://api-amer-prod-gbl-mdm-hub.COMPANY.com/api-gw-spec-amer-prod/swagger-ui/index.html?configUrl=/api-gw-spec-gblus-prod/v3/api-docs/swagger-configBatch Service API documentationhttps://api-amer-prod-gbl-mdm-hub.COMPANY.com/api-batch-spec-amer-prod/swagger-ui/index.html?configUrl=/api-batch-spec-gblus-prod/v3/api-docs/swagger-configAirflowAirflow UIhttps://airflow-amer-prod-gbl-mdm-hub.COMPANY.comConsulConsul 
UIhttps://consul-amer-prod-gbl-mdm-hub.COMPANY.com/ui/AKHQ - KafkaAKHQ Kafka UIhttps://akhq-amer-prod-gbl-mdm-hub.COMPANY.com/Components & LogsENV (namespace)ComponentPods (* means the part of the name that changes)DescriptionLogsPod portsgblus-prodManagermdmhub-mdm-manager-*Gateway APIlogs8081 - application API,8000 - if remote debugging is enabled, this port can be used to debug the app in the environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger API definition - if availablegblus-prodBatch Servicemdmhub-batch-service-*Batch service, ETL batch loaderlogsgblus-prodSubscribermdmhub-reltio-subscriber-*SQS Reltio events subscriberlogsgblus-prodEnrichermdmhub-entity-enricher-*Reltio events enricherlogsgblus-prodCallbackmdmhub-callback-service-*Events processor, callback, and pre-callback servicelogsgblus-prodPublishermdmhub-event-publisher-*Events publisherlogsgblus-prodReconciliationmdmhub-mdm-reconciliation-service-*Reconciliation servicelogsgblus-prodOnekey DCRmdmhub-mdm-onekey-dcr-service-*Onekey DCR servicelogsClientsCDW (GBLUS)ETL - COMPANY (GBLUS)ENGAGE (GBLUS)KOL_ONEVIEW (GBLUS)GRV (GBLUS)GRACE (GBLUS)MDM SystemsReltioPROD - 9kL30u7lFoDHp6XSQS queue namehttps://sqs.us-east-1.amazonaws.com/930358522410/361_9kL30u7lFoDHp6XReltiohttps://361.reltio.com/ui/9kL30u7lFoDHp6Xhttps://361.reltio.com/reltio/api/9kL30u7lFoDHp6XReltio Gateway Usersvc-pfe-mdmhub-prodRDMhttps://rdm.reltio.com/%s/DABr7gxyKKkrxD3Internal ResourcesMongomongodb://mongo-amer-prod-gbl-mdm-hub.COMPANY.com:27017Kafkakafka-amer-prod-gbl-mdm-hub.COMPANY.com:9094 SASL SSLKibanahttps://kibana-amer-prod-gbl-mdm-hub.COMPANY.com/Elasticsearchhttps://elastic-amer-prod-gbl-mdm-hub.COMPANY.com/"
},
{
"title": "AMER SANDBOX Cluster",
"pageID": "310950353",
"pageLink": "/display/GMDM/AMER+SANDBOX+Cluster",
"content": "Physical Architecture<schema>Kubernetes clusternameIPConsole addressresource typeAWS regionFilesystemComponentsTypeatp-mdmhub-sbx-amer●●●●●●●●●●●●●●●●●●●●●●● https://pdcs-som1d.COMPANY.comEKS over EC2us-east-1~60GB per nodeKong, Kafka, Mongo, Prometheus, MDMHUB microservicesoutbound and inboundSANDBOX - backend NamespaceComponentPod nameDescriptionLogskongKongmdmhub-kong-kong-*API managerkubectl logs {{pod name}} --namespace kongamer-backendKafkamdm-kafka-kafka-0mdm-kafka-kafka-1mdm-kafka-kafka-2Kafkalogsamer-backendKafka Exportermdm-kafka-kafka-exporter-*Kafka Monitoring - Prometheuskubectl logs {{pod name}} --namespace amer-backendamer-backendZookeepermdm-kafka-zookeeper-0mdm-kafka-zookeeper-1mdm-kafka-zookeeper-2Zookeeperlogsamer-backendMongomongo-0Mongologsamer-backendKibanakibana-kb-*EFK - kibanakubectl logs {{pod name}} --namespace amer-backendamer-backendFluentDfluentd-*EFK - fluentdkubectl logs {{pod name}} --namespace amer-backendamer-backendElasticsearchelasticsearch-es-default-0elasticsearch-es-default-1elasticsearch-es-default-2EFK - elasticsearchkubectl logs {{pod name}} --namespace amer-backendmonitoringCadvisormonitoring-cadvisor-*Docker Monitoring - Prometheuskubectl logs {{pod name}} --namespace monitoringamer-backendMongo Connectormonstache-*EFK - mongo → elasticsearch exporterkubectl logs {{pod name}} --namespace amer-backendamer-backendMongo exportermongo-exporter-*mongo metrics exporter---amer-backendGit2Consulgit2consul-*GIT to Consul loaderkubectl logs {{pod name}} --namespace amer-backendamer-backendConsulconsul-consul-server-0consul-consul-server-1consul-consul-server-2Consulkubectl logs {{pod name}} --namespace amer-backendamer-backendSnowflake connectoramer-devsbx-mdm-connect-cluster-connect-*Snowflake Kafka Connectorkubectl logs {{pod name}} --namespace amer-backendamer-backendAkhqakhq-*Kafka UIlogsCertificates Wed Aug 31 21:57:19 CEST 2016 until: Sun Aug 31 22:07:17 CEST 2036ResourceCertificate LocationValid fromValid to 
Issued ToKibana, Elasticsearch, Kong, Airflow, Consul, Prometheus,http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/amer/sandbox/namespaces/kong/config_files/certs2023-02-22 15:16:042025-02-21 15:16:04https://api-amer-sandbox-gbl-mdm-hub.COMPANY.com/Kafkahttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/amer/sandbox/namespaces/amer-backend/secrets.yaml.encrypted--kafka-amer-sandbox-gbl-mdm-hub.COMPANY.com:9094"
},
{
"title": "AMER DEVSBX Services",
"pageID": "310950591",
"pageLink": "/display/GMDM/AMER+DEVSBX+Services",
"content": "HUB EndpointsAPI & Kafka & S3 & UIResource NameEndpointGateway API OAuth2 External - DEVhttps://api-amer-sandbox-gbl-mdm-hub.COMPANY.com/ext-api-gw-amer-devsbxPing Federatehttps://devfederate.COMPANY.com/as/introspect.oauth2Gateway API KEY auth - DEVhttps://api-amer-sandbox-gbl-mdm-hub.COMPANY.com/api-gw-amer-devsbxKafkakafka-amer-sandbox-gbl-mdm-hub.COMPANY.com:9094MDM HUB S3 s3://gblmdmhubnprodamrasp100762HUB UIhttps://api-amer-sandbox-gbl-mdm-hub.COMPANY.com/ui-amer-devsbx/#/dashboardGrafana dashboardsResource NameEndpointHUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=amer_devsbx&var-node=All&var-type=entitiesKafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=amer_devsbx&var-topic=All&var-node=11Host Statisticshttps://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-job=node_exporter&var-env=amer_sandboxJMX Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=amer_devsbx&var-component=managerKonghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=amer_sandbox&var-service=All&var-node=AllMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=amer_devsbx&var-interval=$__auto_interval_intervalKibana dashboardsResource NameEndpointKibanahttps://kibana-amer-sandbox-gbl-mdm-hub.COMPANY.com (DEVSBX prefixed dashboards)DocumentationResource NameEndpointManager API documentationhttps://api-amer-sandbox-gbl-mdm-hub.COMPANY.com/api-gw-spec-amer-devsbx/swagger-ui/index.html?configUrl=/api-gw-spec-amer-devsbx/v3/api-docs/swagger-configBatch Service API documentationhttps://api-amer-sandbox-gbl-mdm-hub.COMPANY.com/api-batch-spec-amer-devsbx/swagger-ui/index.html?configUrl=/api-batch-spec-amer-devsbx/v3/api-docs/swagger-configAirflowResource 
NameEndpointAirflow UIhttps://airflow-amer-sandbox-gbl-mdm-hub.COMPANY.comConsulResource NameEndpointConsul UIhttps://consul-amer-sandbox-gbl-mdm-hub.COMPANY.comAKHQ - KafkaResource NameEndpointAKHQ Kafka UIhttps://akhq-amer-sandbox-gbl-mdm-hub.COMPANY.comComponents & LogsENV (namespace)ComponentPods (* means the part of the name that changes)DescriptionLogsPod portsamer-devsbxManagermdmhub-mdm-manager-*Gateway APIlogs8081 - application API,8000 - if remote debugging is enabled, this port can be used to debug the app in the environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger API definition - if availableamer-devsbxBatch Servicemdmhub-batch-service-*Batch service, ETL batch loaderlogsamer-devsbxApi routermdmhub-mdm-api-router-*API gateway across multiple tenantslogsamer-devsbxEnrichermdmhub-entity-enricher-*Reltio events enricherlogsamer-devsbxCallbackmdmhub-callback-service-*Events processor, callback, and pre-callback servicelogsamer-devsbxPublishermdmhub-event-publisher-*Events publisherlogsamer-devsbxReconciliationmdmhub-mdm-reconciliation-service-*Reconciliation servicelogsInternal ResourcesResource NameEndpointMongomongodb://mongo-amer-sandbox-gbl-mdm-hub.COMPANY.com:27017Kafkakafka-amer-sandbox-gbl-mdm-hub.COMPANY.com:9094 SASL SSLKibanahttps://kibana-amer-sandbox-gbl-mdm-hub.COMPANY.comElasticsearchhttps://elastic-amer-sandbox-gbl-mdm-hub.COMPANY.com"
},
{
"title": "APAC",
"pageID": "228933517",
"pageLink": "/display/GMDM/APAC",
"content": ""
},
{
"title": "APAC Non PROD Cluster",
"pageID": "228933519",
"pageLink": "/display/GMDM/APAC+Non+PROD+Cluster",
"content": "Physical ArchitectureKubernetes clusternameIPConsole addressresource typeAWS regionFilesystemComponentsTypeatp-mdmhub-nprod-apac●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●https://pdcs-apa1p.COMPANY.comEKS over EC2ap-southeast-1~60GB per node,6TBx2 replicated Portworx volumesKong, Kafka, Mongo, Prometheus, MDMHUB microservicesinbound/outboundComponents & LogsDEV - microservicesENV (namespace)ComponentPodDescriptionLogsPod portsapac-devManagermdmhub-mdm-manager-*Managerlogs8081 - application API,8000 - if remote debugging is enabled, this port can be used to debug the app in the environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger API definition - if availableapac-devBatch Servicemdmhub-batch-service-*Batch Servicelogsapac-devAPI routermdmhub-mdm-api-router-*API Routerlogsapac-devReltio Subscribermdmhub-reltio-subscriber-*Reltio Subscriberlogsapac-devEntity Enrichermdmhub-entity-enricher-*Entity Enricherlogsapac-devCallback Servicemdmhub-callback-service-*Callback Servicelogsapac-devEvent Publishermdmhub-event-publisher-*Event Publisherlogsapac-devReconciliation Servicemdmhub-mdm-reconciliation-service-*Reconciliation Servicelogsapac-devCallback delay servicemdmhub-callback-delay-service-*Callback delay servicelogsQA - microservicesENV (namespace)ComponentPodDescriptionLogsPod portsapac-qaManagermdmhub-mdm-manager-*Managerlogs8081 - application API,8000 - if remote debugging is enabled, this port can be used to debug the app in the environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger API definition - if availableapac-qaBatch Servicemdmhub-batch-service-*Batch Servicelogsapac-qaAPI routermdmhub-mdm-api-router-*API Routerlogsapac-qaReltio Subscribermdmhub-reltio-subscriber-*Reltio Subscriberlogsapac-qaEntity Enrichermdmhub-entity-enricher-*Entity Enricherlogsapac-qaCallback Servicemdmhub-callback-service-*Callback Servicelogsapac-qaEvent Publishermdmhub-event-publisher-*Event Publisherlogsapac-qaReconciliation 
Servicemdmhub-mdm-reconciliation-service-*Reconciliation Servicelogsapac-qaCallback delay servicemdmhub-callback-delay-service-*Callback delay servicelogsSTAGE - microservicesENV (namespace)ComponentPodDescriptionLogsPod portsapac-stageManagermdmhub-mdm-manager-*Managerlogs8081 - application API,8000 - if remote debugging is enabled, this port can be used to debug the app in the environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger API definition - if availableapac-stageBatch Servicemdmhub-batch-service-*Batch Servicelogsapac-stageAPI routermdmhub-mdm-api-router-*API Routerlogsapac-stageReltio Subscribermdmhub-reltio-subscriber-*Reltio Subscriberlogsapac-stageEntity Enrichermdmhub-entity-enricher-*Entity Enricherlogsapac-stageCallback Servicemdmhub-callback-service-*Callback Servicelogsapac-stageEvent Publishermdmhub-event-publisher-*Event Publisherlogsapac-stageReconciliation Servicemdmhub-mdm-reconciliation-service-*Reconciliation Servicelogsapac-stageCallback delay servicemdmhub-callback-delay-service-*Callback delay servicelogsNon PROD - backend NamespaceComponentPodDescriptionLogskongKongmdmhub-kong-kong-*API managerkubectl logs {{pod name}} --namespace kongapac-backendKafkamdm-kafka-kafka-0mdm-kafka-kafka-1mdm-kafka-kafka-2Kafkalogsapac-backendKafka Exportermdm-kafka-kafka-exporter-*Kafka Monitoring - Prometheuskubectl logs {{pod name}} --namespace apac-backendapac-backendZookeeper mdm-kafka-zookeeper-0mdm-kafka-zookeeper-1mdm-kafka-zookeeper-2Zookeeperlogsapac-backendMongomongo-0Mongologsapac-backendKibanakibana-kb-*EFK - kibanakubectl logs {{pod name}} --namespace apac-backendapac-backendFluentDfluentd-*EFK - fluentdkubectl logs {{pod name}} --namespace apac-backendapac-backendElasticsearchelasticsearch-es-default-0elasticsearch-es-default-1EFK - elasticsearchkubectl logs {{pod name}} --namespace apac-backendapac-backendSQS ExporterTODOSQS Reltio exporterkubectl logs {{pod name}} --namespace 
apac-backendmonitoringcAdvisormonitoring-cadvisor-*Docker Monitoring - Prometheuskubectl logs {{pod name}} --namespace monitoringapac-backendMongo Connectormonstache-*EFK - mongo → elasticsearch exporterkubectl logs {{pod name}} --namespace apac-backendapac-backendMongo exportermongo-exporter-*mongo metrics exporter---apac-backendGit2Consulgit2consul-*GIT to Consul loaderkubectl logs {{pod name}} --namespace apac-backendapac-backendConsulconsul-consul-server-0consul-consul-server-1consul-consul-server-2Consulkubectl logs {{pod name}} --namespace apac-backendapac-backendSnowflake connectorapac-dev-mdm-connect-cluster-connect-*apac-qa-mdm-connect-cluster-connect-*apac-stage-mdm-connect-cluster-connect-*Snowflake Kafka Connectorkubectl logs {{pod name}} --namespace apac-backendmonitoringKafka Connect Exportermonitoring-jdbc-snowflake-exporter-apac-dev-*monitoring-jdbc-snowflake-exporter-apac-stage-*monitoring-jdbc-snowflake-exporter-apac-stage-*Kafka Connect metric exporterkubectl logs {{pod name}} --namespace monitoringapac-backendAKHQakhq-*Kafka UIlogsCertificates ResourceCertificate LocationValid fromValid to Issued ToKibana, Elasticsearch, Kong, Airflow, Consul, Prometheus,http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/apac/nprod/namespaces/kong/config_files/certs2022/03/042024/03/03https://api-apac-nprod-gbl-mdm-hub.COMPANY.comKafkahttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/apac/nprod/namespaces/apac-backend/secrets.yaml.encrypted2022/03/072024/03/06https://kafka-api-nprod-gbl-mdm-hub.COMPANY.com:9094"
},
{
"title": "APAC DEV Services",
"pageID": "228933556",
"pageLink": "/display/GMDM/APAC+DEV+Services",
"content": "HUB EndpointsAPI & Kafka & S3 & UIResource NameEndpointGateway API OAuth2 External - DEVhttps://api-apac-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-apac-devPing Federatehttps://devfederate.COMPANY.com/as/introspect.oauth2Gateway API KEY auth - DEVhttps://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-gw-apac-devKafkakafka-apac-nprod-gbl-mdm-hub.COMPANY.com:9094MDM HUB S3 s3://globalmdmnprodaspasp202202171347HUB UIhttps://api-apac-nprod-gbl-mdm-hub.COMPANY.com/ui-apac-dev/#/dashboardSnowflake MDM DataMartResource NameEndpointDB Urlhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comDB NameCOMM_APAC_MDM_DMART_DEV_DBDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_APAC_MDM_DMART_DEV_DEVOPS_ROLEResource NameEndpointHUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=apac_dev&var-node=All&var-type=entitiesKafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=apac_dev&var-topic=All&var-node=1JMX Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=apac_dev&var-component=managerKonghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=apac_nprod&var-service=All&var-node=AllMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=apac_dev&var-interval=$__auto_interval_intervalKube Statehttps://mdm-monitoring.COMPANY.com/grafana/d/garysdevil-kube-state-metrics-v2/kube-state?orgId=1&refresh=30s&var-cluster=apac-nprod&var-node=All&var-namespace=All&var-datasource=PrometheusPod Monitoringhttps://mdm-monitoring.COMPANY.com/grafana/d/AAOMjeHmk/pod-monitoring?orgId=1&refresh=10s&var-env=apac_nprod&var-namespace=AllPVC Monitoringhttps://mdm-monitoring.COMPANY.com/grafana/d/R_-8aaf7k/pvc-monitoring?orgId=1&refresh=30s&var-env=apac_nprodResource 
NameEndpointKibanahttps://kibana-apac-nprod-gbl-mdm-hub.COMPANY.com (DEV prefixed dashboards)DocumentationResource NameEndpointManager API documentationhttps://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-apac-dev/swagger-ui/index.html?configUrl=/api-gw-spec-apac-dev/v3/api-docs/swagger-configBatch Service API documentationhttps://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-apac-dev/swagger-ui/index.html?configUrl=/api-batch-spec-apac-dev/v3/api-docs/swagger-configAirflowResource NameEndpointAirflow UIhttps://airflow-apac-nprod-gbl-mdm-hub.COMPANY.comConsulResource NameEndpointConsul UIhttps://consul-apac-nprod-gbl-mdm-hub.COMPANY.comAKHQ - KafkaResource NameEndpointAKHQ Kafka UIhttps://akhq-apac-nprod-gbl-mdm-hub.COMPANY.comClientsMAPP (EMEA, AMER, APAC)GRACEMedicEASIEngageETL MDM SystemsReltio DEV - 2NBAwv1z2AvlkgSResource NameEndpointSQS queue namehttps://sqs.ap-southeast-1.amazonaws.com/930358522410/mpe-02_2NBAwv1z2AvlkgSReltiohttps://mpe-02.reltio.com/ui/2NBAwv1z2AvlkgShttps://mpe-02.reltio.com/reltio/api/2NBAwv1z2AvlkgSReltio Gateway Usersvc-pfe-mdmhubRDMhttps://rdm.reltio.com/lookups/GltqYa2x8xzSnB8Internal ResourcesResource NameEndpointMongomongodb://mongo-apac-nprod-gbl-mdm-hub.COMPANY.com:27017Kafkakafka-apac-nprod-gbl-mdm-hub.COMPANY.com:9094 SASL SSLKibanahttps://kibana-apac-nprod-gbl-mdm-hub.COMPANY.comElasticsearchhttps://elastic-apac-nprod-gbl-mdm-hub.COMPANY.com"
},
{
"title": "APAC QA Services",
"pageID": "234693067",
"pageLink": "/display/GMDM/APAC+QA+Services",
"content": "HUB EndpointsAPI & Kafka & S3 & UIResource NameEndpointGateway API OAuth2 External - DEVhttps://api-apac-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-apac-qaPing Federatehttps://devfederate.COMPANY.com/as/introspect.oauth2Gateway API KEY auth - DEVhttps://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-gw-apac-qaKafkakafka-apac-nprod-gbl-mdm-hub.COMPANY.com:9094MDM HUB S3 s3://globalmdmnprodaspasp202202171347HUB UIhttps://api-apac-nprod-gbl-mdm-hub.COMPANY.com/ui-apac-qa/#/dashboardSnowflake MDM DataMartResource NameEndpointDB Urlhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comDB NameCOMM_APAC_MDM_DMART_QA_DBDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_APAC_MDM_DMART_QA_DEVOPS_ROLEResource NameEndpointHUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=apac_qa&var-node=All&var-type=entitiesKafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=apac_qa&var-topic=All&var-node=1JMX Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=apac_qa&var-component=managerKonghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=apac_nprod&var-service=All&var-node=AllMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=apac_qa&var-interval=$__auto_interval_intervalKube Statehttps://mdm-monitoring.COMPANY.com/grafana/d/garysdevil-kube-state-metrics-v2/kube-state?orgId=1&refresh=30s&var-cluster=apac-nprod&var-node=All&var-namespace=All&var-datasource=PrometheusPod Monitoringhttps://mdm-monitoring.COMPANY.com/grafana/d/AAOMjeHmk/pod-monitoring?orgId=1&refresh=10s&var-env=apac_nprod&var-namespace=AllPVC Monitoringhttps://mdm-monitoring.COMPANY.com/grafana/d/R_-8aaf7k/pvc-monitoring?orgId=1&refresh=30s&var-env=apac_nprodResource 
NameEndpointKibanahttps://kibana-apac-nprod-gbl-mdm-hub.COMPANY.com (QA prefixed dashboards)DocumentationResource NameEndpointManager API documentationhttps://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-apac-qa/swagger-ui/index.html?configUrl=/api-gw-spec-apac-qa/v3/api-docs/swagger-configBatch Service API documentationhttps://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-apac-qa/swagger-ui/index.html?configUrl=/api-batch-spec-apac-qa/v3/api-docs/swagger-configAirflowResource NameEndpointAirflow UIhttps://airflow-apac-nprod-gbl-mdm-hub.COMPANY.comConsulResource NameEndpointConsul UIhttps://consul-apac-nprod-gbl-mdm-hub.COMPANY.comAKHQ - KafkaResource NameEndpointAKHQ Kafka UIhttps://akhq-apac-nprod-gbl-mdm-hub.COMPANY.comClientsMAPP (EMEA, AMER, APAC)GRACEMedicEASIEngageETL MDM SystemsReltio QA - xs4oRCXpCKewNDKResource NameEndpointSQS queue namehttps://sqs.ap-southeast-1.amazonaws.com/930358522410/mpe-02_xs4oRCXpCKewNDKReltiohttps://mpe-02.reltio.com/ui/xs4oRCXpCKewNDKhttps://mpe-02.reltio.com/reltio/api/xs4oRCXpCKewNDKReltio Gateway Usersvc-pfe-mdmhubRDMhttps://rdm.reltio.com/lookups/jemrjLkPUhOsPMaInternal ResourcesResource NameEndpointMongomongodb://mongo-apac-nprod-gbl-mdm-hub.COMPANY.com:27017Kafkakafka-apac-nprod-gbl-mdm-hub.COMPANY.com:9094 SASL SSLKibanahttps://kibana-apac-nprod-gbl-mdm-hub.COMPANY.comElasticsearchhttps://elastic-apac-nprod-gbl-mdm-hub.COMPANY.com"
},
{
"title": "APAC STAGE Services",
"pageID": "234693073",
"pageLink": "/display/GMDM/APAC+STAGE+Services",
"content": "HUB EndpointsAPI & Kafka & S3 & UIResource NameEndpointGateway API OAuth2 External - DEVhttps://api-apac-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-apac-stagePing Federatehttps://devfederate.COMPANY.com/as/introspect.oauth2Gateway API KEY auth - DEVhttps://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-gw-apac-stageKafkakafka-apac-nprod-gbl-mdm-hub.COMPANY.com:9094MDM HUB S3 s3://globalmdmnprodaspasp202202171347HUB UIhttps://api-apac-nprod-gbl-mdm-hub.COMPANY.com/ui-apac-stage/#/dashboardSnowflake MDM DataMartResource NameEndpointDB Urlhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comDB NameCOMM_APAC_MDM_DMART_STG_DBDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_APAC_MDM_DMART_STG_DEVOPS_ROLEResource NameEndpointHUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=apac_stage&var-node=All&var-type=entitiesKafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=apac_stage&var-topic=All&var-node=1JMX Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=apac_stage&var-component=managerKonghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=apac_nprod&var-service=All&var-node=AllMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=apac_stage&var-interval=$__auto_interval_intervalKube Statehttps://mdm-monitoring.COMPANY.com/grafana/d/garysdevil-kube-state-metrics-v2/kube-state?orgId=1&refresh=30s&var-cluster=apac-nprod&var-node=All&var-namespace=All&var-datasource=PrometheusPod Monitoringhttps://mdm-monitoring.COMPANY.com/grafana/d/AAOMjeHmk/pod-monitoring?orgId=1&refresh=10s&var-env=apac_nprod&var-namespace=AllPVC Monitoringhttps://mdm-monitoring.COMPANY.com/grafana/d/R_-8aaf7k/pvc-monitoring?orgId=1&refresh=30s&var-env=apac_nprodResource 
NameEndpointKibanahttps://kibana-apac-nprod-gbl-mdm-hub.COMPANY.com (STAGE prefixed dashboards)DocumentationResource NameEndpointManager API documentationhttps://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-apac-stage/swagger-ui/index.html?configUrl=/api-gw-spec-apac-stage/v3/api-docs/swagger-configBatch Service API documentationhttps://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-apac-stage/swagger-ui/index.html?configUrl=/api-batch-spec-apac-stage/v3/api-docs/swagger-configAirflowResource NameEndpointAirflow UIhttps://airflow-apac-nprod-gbl-mdm-hub.COMPANY.comConsulResource NameEndpointConsul UIhttps://consul-apac-nprod-gbl-mdm-hub.COMPANY.comAKHQ - KafkaResource NameEndpointAKHQ Kafka UIhttps://akhq-apac-nprod-gbl-mdm-hub.COMPANY.comClientsMAPP (EMEA, AMER, APAC)GRACEMedicEASIEngageETL MDM SystemsReltio STAGE - Y4StMNK3b0AGDf6Resource NameEndpointSQS queue namehttps://sqs.ap-southeast-1.amazonaws.com/930358522410/mpe-02_Y4StMNK3b0AGDf6Reltiohttps://mpe-02.reltio.com/ui/Y4StMNK3b0AGDf6https://mpe-02.reltio.com/reltio/api/Y4StMNK3b0AGDf6Reltio Gateway Usersvc-pfe-mdmhubRDMhttps://rdm.reltio.com/lookups/NYa4AETF73napDaInternal ResourcesResource NameEndpointMongomongodb://mongo-apac-nprod-gbl-mdm-hub.COMPANY.com:27017Kafkakafka-apac-nprod-gbl-mdm-hub.COMPANY.com:9094 SASL SSLKibanahttps://kibana-apac-nprod-gbl-mdm-hub.COMPANY.comElasticsearchhttps://elastic-apac-nprod-gbl-mdm-hub.COMPANY.com"
},
{
"title": "APAC PROD Cluster",
"pageID": "234712170",
"pageLink": "/display/GMDM/APAC+PROD+Cluster",
"content": "Physical ArchitectureKubernetes clusternameIPConsole addressresource typeAWS regionFilesystemComponentsTypeatp-mdmhub-prod-apac●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●https://pdcs-apa1p.COMPANY.comEKS over EC2ap-southeast-1~60GB per node,6TBx2 replicated Portworx volumesKong, Kafka, Mongo, Prometheus, MDMHUB microservicesinbound/outboundComponents & LogsPROD - microservicesENV (namespace)ComponentPodDescriptionLogsPod portsapac-prodManagermdmhub-mdm-manager-*Managerlogs8081 - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger API definition - if availableapac-prodBatch Servicemdmhub-batch-service-*Batch Servicelogsapac-prodAPI routermdmhub-mdm-api-router-*API Routerlogsapac-prodReltio Subscribermdmhub-reltio-subscriber-*Reltio Subscriberlogsapac-prodEntity Enrichermdmhub-entity-enricher-*Entity Enricherlogsapac-prodCallback Servicemdmhub-callback-service-*Callback Servicelogsapac-prodEvent Publishermdmhub-event-publisher-*Event Publisherlogsapac-prodReconciliation Servicemdmhub-mdm-reconciliation-service-*Reconciliation Servicelogsapac-prodCallback delay servicemdmhub-callback-delay-service-*Callback delay servicelogsPROD - backend NamespaceComponentPodDescriptionLogskongKongmdmhub-kong-kong-*API managerkubectl logs {{pod name}} --namespace kongapac-backendKafkamdm-kafka-kafka-0mdm-kafka-kafka-1mdm-kafka-kafka-2Kafkalogsapac-backendKafka Exportermdm-kafka-kafka-exporter-*Kafka Monitoring - Prometheuskubectl logs {{pod name}} --namespace apac-backendapac-backendZookeeper mdm-kafka-zookeeper-0mdm-kafka-zookeeper-1mdm-kafka-zookeeper-2Zookeeperlogsapac-backendMongomongo-0Mongologsapac-backendKibanakibana-kb-*EFK - kibanakubectl logs {{pod name}} --namespace apac-backendapac-backendFluentDfluentd-*EFK - fluentdkubectl logs {{pod name}} --namespace 
apac-backendapac-backendElasticsearchelasticsearch-es-default-0elasticsearch-es-default-1EFK - elasticsearchkubectl logs {{pod name}} --namespace apac-backendapac-backendSQS ExporterTODOSQS Reltio exporterkubectl logs {{pod name}} --namespace apac-backendmonitoringcAdvisormonitoring-cadvisor-*Docker Monitoring - Prometheuskubectl logs {{pod name}} --namespace monitoringapac-backendMongo Connectormonstache-*EFK - mongo → elasticsearch exporterkubectl logs {{pod name}} --namespace apac-backendapac-backendMongo exportermongo-exporter-*mongo metrics exporter---apac-backendGit2Consulgit2consul-*GIT to Consul loaderkubectl logs {{pod name}} --namespace apac-backendapac-backendConsulconsul-consul-server-0consul-consul-server-1consul-consul-server-2Consulkubectl logs {{pod name}} --namespace apac-backendapac-backendSnowflake connectorapac-prod-mdm-connect-cluster-connect-*Snowflake Kafka Connectorkubectl logs {{pod name}} --namespace apac-backendmonitoringKafka Connect Exportermonitoring-jdbc-snowflake-exporter-apac-prod-*Kafka Connect metric exporterkubectl logs {{pod name}} --namespace monitoringapac-backendAKHQakhq-*Kafka UIlogsCertificates ResourceCertificate LocationValid fromValid to Issued ToKibana, Elasticsearch, Kong, Airflow, Consul, Prometheus,http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/apac/prod/namespaces/kong/config_files/certs2022/03/042024/03/03https://api-apac-prod-gbl-mdm-hub.COMPANY.comKafkahttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/apac/prod/namespaces/apac-backend/secrets.yaml.encrypted2022/03/072024/03/06https://kafka-api-prod-gbl-mdm-hub.COMPANY.com:9094"
},
{
"title": "APAC PROD Services",
"pageID": "234712172",
"pageLink": "/display/GMDM/APAC+PROD+Services",
"content": "HUB EndpointsAPI & Kafka & S3 & UIResource NameEndpointGateway API OAuth2 External - PRODhttps://api-apac-prod-gbl-mdm-hub.COMPANY.com/ext-api-gw-apac-prodPing Federatehttps://prodfederate.COMPANY.com/as/introspect.oauth2Gateway API KEY auth - PRODhttps://api-apac-prod-gbl-mdm-hub.COMPANY.com/api-gw-apac-prodKafkakafka-apac-prod-gbl-mdm-hub.COMPANY.com:9094MDM HUB S3 s3://globalmdmprodaspasp202202171415HUB UIhttps://api-apac-prod-gbl-mdm-hub.COMPANY.com/ui-apac-prod/#/dashboardSnowflake MDM DataMartResource NameEndpointDB Urlemeaprod01.eu-west-1.privatelink.snowflakecomputing.comDB NameCOMM_APAC_MDM_DMART_PROD_DBDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_APAC_MDM_DMART_PROD_DEVOPS_ROLEResource NameEndpointHUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=apac_prod&var-node=All&var-type=entitiesKafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=apac_prod&var-topic=All&var-node=1JMX Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=apac_prod&var-component=mdm_managerKonghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=apac_prod&var-service=All&var-node=AllMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=apac_prod&var-interval=$__auto_interval_intervalKube Statehttps://mdm-monitoring.COMPANY.com/grafana/d/garysdevil-kube-state-metrics-v2/kube-state?orgId=1&refresh=30s&var-cluster=atp-mdmhub-prod-apac&var-node=All&var-namespace=All&var-datasource=PrometheusPod Monitoringhttps://mdm-monitoring.COMPANY.com/grafana/d/AAOMjeHmk/pod-monitoring?orgId=1&refresh=10s&var-env=apac_prod&var-namespace=AllPVC Monitoringhttps://mdm-monitoring.COMPANY.com/grafana/d/R_-8aaf7k/pvc-monitoring?orgId=1&refresh=30s&var-env=apac_prodResource 
NameEndpointKibanahttps://kibana-apac-prod-gbl-mdm-hub.COMPANY.com (PROD prefixed dashboards)DocumentationResource NameEndpointManager API documentationhttps://api-apac-prod-gbl-mdm-hub.COMPANY.com/api-gw-spec-apac-prod/swagger-ui/index.html?configUrl=/api-gw-spec-apac-prod/v3/api-docs/swagger-configBatch Service API documentationhttps://api-apac-prod-gbl-mdm-hub.COMPANY.com/api-batch-spec-apac-prod/swagger-ui/index.html?configUrl=/api-batch-spec-apac-prod/v3/api-docs/swagger-configAirflowResource NameEndpointAirflow UIhttps://airflow-apac-prod-gbl-mdm-hub.COMPANY.comConsulResource NameEndpointConsul UIhttps://consul-apac-prod-gbl-mdm-hub.COMPANY.comAKHQ - KafkaResource NameEndpointAKHQ Kafka UIhttps://akhq-apac-prod-gbl-mdm-hub.COMPANY.comClientsMAPP (EMEA, AMER, APAC)GRACEMedicEASIEngageETL MDM SystemsReltio PROD - sew6PfkTtSZhLdWResource NameEndpointSQS queue namehttps://sqs.ap-southeast-1.amazonaws.com/930358522410/ap-360_sew6PfkTtSZhLdWReltiohttps://ap-360.reltio.com/ui/sew6PfkTtSZhLdWhttps://ap-360.reltio.com/reltio/api/sew6PfkTtSZhLdWReltio Gateway Usersvc-pfe-mdmhub-prodRDMhttps://rdm.reltio.com/lookups/ARTA9lOg3dbvDqkInternal ResourcesResource NameEndpointMongomongodb://mongo-apac-prod-gbl-mdm-hub.COMPANY.com:27017Kafkakafka-apac-prod-gbl-mdm-hub.COMPANY.com:9094 SASL SSLKibanahttps://kibana-apac-prod-gbl-mdm-hub.COMPANY.comElasticsearchhttps://elastic-apac-prod-gbl-mdm-hub.COMPANY.com"
},
{
"title": "EMEA",
"pageID": "181022903",
"pageLink": "/display/GMDM/EMEA",
"content": ""
},
{
"title": "EMEA External proxy",
"pageID": "308256760",
"pageLink": "/display/GMDM/EMEA+External+proxy",
"content": "The page describes the Kong external proxy servers deployed in a DLP (Double Lollipop) AWS account, used by clients outside of the COMPANY network to access MDM Hub.Kong proxy instancesEnvironmentConsole addressInstanceSSH accessresource typeAWS regionAWS Account IDComponentsNon PRODhttp://awsprodv2.COMPANY.com/and use the role:WBS-EUW1-GBICC-ALLENV-RO-SSOi-08d4b21c314a98700 (EUW1Z2DL115)ssh ec2-user@euw1z2dl115.COMPANY.comEC2eu-west-1432817204314KongPRODi-091aa7f1fe1ede714 (EUW1Z2DL113)ssh ec2-user@euw1z2dl113.COMPANY.comi-05c4532bf7b8d7511 (EUW1Z2DL114)ssh ec2-user@euw1z2dl114.COMPANY.com External Hub EndpointsEnvironmentServiceEndpointInbound security group configurationNon PRODAPIhttps://api-emea-k8s-nprod-gbl-mdm-hub-ext.COMPANY.com/MDMHub-kafka-and-api-proxy-external-nprod-sgKafkakafka-b1-emea-k8s-nprod-gbl-mdm-hub-ext.COMPANY.com:9095kafka-b2-emea-k8s-nprod-gbl-mdm-hub-ext.COMPANY.com:9095kafka-b3-emea-k8s-nprod-gbl-mdm-hub-ext.COMPANY.com:9095PRODAPIhttps://api-emea-k8s-prod-gbl-mdm-hub-ext.COMPANY.com/MDMHub-kafka-and-api-proxy-external-prod-sg - due to the limit of 60 rules per SG, add new ones to:MDMHub-kafka-and-api-proxy-external-prod-sg-2Kafkakafka-b1-emea-k8s-prod-gbl-mdm-hub-ext.COMPANY.com:9095kafka-b2-emea-k8s-prod-gbl-mdm-hub-ext.COMPANY.com:9095kafka-b3-emea-k8s-prod-gbl-mdm-hub-ext.COMPANY.com:9095ClientsEnvironmentClientsNon PRODFind all details in the Security GroupMDMHub-kafka-and-api-proxy-external-nprod-sgPRODFind all details in the Security GroupMDMHub-kafka-and-api-proxy-external-prod-sgAnsible configurationResourceAddressInstall Kong proxyhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/ansible/install_kong.ymlInstall cadvisorhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/ansible/install_cadvisor.ymlNon PROD inventoryhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/ansible/inventory/proxy_nprodPROD 
inventoryhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/ansible/inventory/proxy_prodUseful SOPsHow to access AWS ConsoleHow to restart the EC2 instanceHow to login to hosts with SSHNo downtime Kong restart/upgrade"
},
{
"title": "EMEA Non PROD Cluster",
"pageID": "181022904",
"pageLink": "/display/GMDM/EMEA+Non+PROD+Cluster",
"content": "Physical ArchitectureKubernetes clusternameIPConsole addressresource typeAWS regionFilesystemComponentsTypeatp-mdmhub-nprod-emea10.90.96.0/2310.90.98.0/23https://pdcs-ema1p.COMPANY.com/EKS over EC2eu-west-1~100GB per node,7.3Ti x2 replicated Portworx volumesKong, Kafka, Mongo, Prometheus, MDMHUB microservicesinbound/outboundComponents & LogsDEV - microservicesENV (namespace)ComponentPodDescriptionLogsPod portsemea-devManagermdmhub-mdm-manager-*Managerlogs8081 - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger API definition - if availableemea-devBatch Servicemdmhub-batch-service-*Batch Servicelogsemea-devAPI routermdmhub-mdm-api-router-*API Routerlogsemea-devReltio Subscribermdmhub-reltio-subscriber-*Reltio Subscriberlogsemea-devEntity Enrichermdmhub-entity-enricher-*Entity Enricherlogsemea-devCallback Servicemdmhub-callback-service-*Callback Servicelogsemea-devEvent Publishermdmhub-event-publisher-*Event Publisherlogsemea-devReconciliation Servicemdmhub-mdm-reconciliation-service-*Reconciliation ServicelogsQA - microservicesENV (namespace)ComponentPodDescriptionLogsPod portsemea-qaManagermdmhub-mdm-manager-*Managerlogs8081 - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger API definition - if availableemea-qaBatch Servicemdmhub-batch-service-*Batch Servicelogsemea-qaAPI routermdmhub-mdm-api-router-*API Routerlogsemea-qaReltio Subscribermdmhub-reltio-subscriber-*Reltio Subscriberlogsemea-qaEntity Enrichermdmhub-entity-enricher-*Entity Enricherlogsemea-qaCallback Servicemdmhub-callback-service-*Callback Servicelogsemea-qaEvent Publishermdmhub-event-publisher-*Event Publisherlogsemea-qaReconciliation Servicemdmhub-mdm-reconciliation-service-*Reconciliation ServicelogsSTAGE - microservicesENV 
(namespace)ComponentPodDescriptionLogsPod portsemea-stageManagermdmhub-mdm-manager-*Managerlogs8081 - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger API definition - if availableemea-stageBatch Servicemdmhub-batch-service-*Batch Servicelogsemea-stageAPI routermdmhub-mdm-api-router-*API Routerlogsemea-stageReltio Subscribermdmhub-reltio-subscriber-*Reltio Subscriberlogsemea-stageEntity Enrichermdmhub-entity-enricher-*Entity Enricherlogsemea-stageCallback Servicemdmhub-callback-service-*Callback Servicelogsemea-stageEvent Publishermdmhub-event-publisher-*Event Publisherlogsemea-stageReconciliation Servicemdmhub-mdm-reconciliation-service-*Reconciliation ServicelogsGBL DEV - microservicesENV (namespace)ComponentPodDescriptionLogsPod portsgbl-devManagermdmhub-mdm-manager-*Managerlogs8081 - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger API definition - if availablegbl-devBatch Servicemdmhub-batch-service-*Batch Servicelogsgbl-devReltio Subscribermdmhub-reltio-subscriber-*Reltio Subscriberlogsgbl-devEntity Enrichermdmhub-entity-enricher-*Entity Enricherlogsgbl-devCallback Servicemdmhub-callback-service-*Callback Servicelogsgbl-devEvent Publishermdmhub-event-publisher-*Event Publisherlogsgbl-devReconciliation Servicemdmhub-mdm-reconciliation-service-*Reconciliation Servicelogsgbl-devDCR Servicemdmhub-mdm-dcr-service-*DCR Servicelogsgbl-devMAP Channel mdmhub-mdm-map-channel-*MAP Channellogsgbl-devPforceRX Channelmdm-pforcerx-channel-*PforceRX ChannellogsGBL QA - microservicesENV (namespace)ComponentPodDescriptionLogsPod portsgbl-qaManagermdmhub-mdm-manager-*Managerlogs8081 - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus 
exporter,8888 - spring boot actuator,8080 - serves swagger API definition - if availablegbl-qaBatch Servicemdmhub-batch-service-*Batch Servicelogsgbl-qaReltio Subscribermdmhub-reltio-subscriber-*Reltio Subscriberlogsgbl-qaEntity Enrichermdmhub-entity-enricher-*Entity Enricherlogsgbl-qaCallback Servicemdmhub-callback-service-*Callback Servicelogsgbl-qaEvent Publishermdmhub-event-publisher-*Event Publisherlogsgbl-qaReconciliation Servicemdmhub-mdm-reconciliation-service-*Reconciliation Servicelogsgbl-qaDCR Servicemdmhub-mdm-dcr-service-*DCR Servicelogsgbl-qaMAP Channel mdmhub-mdm-map-channel-*MAP Channellogsgbl-qaPforceRX Channelmdm-pforcerx-channel-*PforceRX ChannellogsGBL STAGE - microservicesENV (namespace)ComponentPodDescriptionLogsPod portsgbl-stageManagermdmhub-mdm-manager-*Managerlogs8081 - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger API definition - if availablegbl-stageBatch Servicemdmhub-batch-service-*Batch Servicelogsgbl-stageReltio Subscribermdmhub-reltio-subscriber-*Reltio Subscriberlogsgbl-stageEntity Enrichermdmhub-entity-enricher-*Entity Enricherlogsgbl-stageCallback Servicemdmhub-callback-service-*Callback Servicelogsgbl-stageEvent Publishermdmhub-event-publisher-*Event Publisherlogsgbl-stageReconciliation Servicemdmhub-mdm-reconciliation-service-*Reconciliation Servicelogsgbl-stageDCR Servicemdmhub-mdm-dcr-service-*DCR Servicelogsgbl-stageMAP Channel mdmhub-mdm-map-channel-*MAP Channellogsgbl-stagePforceRX Channelmdm-pforcerx-channel-*PforceRX ChannellogsNon PROD - backend NamespaceComponentPodDescriptionLogskongKongmdmhub-kong-kong-*API managerkubectl logs {{pod name}} --namespace kongemea-backendKafkamdm-kafka-kafka-0mdm-kafka-kafka-1mdm-kafka-kafka-2Kafkalogsemea-backendKafka Exportermdm-kafka-kafka-exporter-*Kafka Monitoring - Prometheuskubectl logs {{pod name}} --namespace emea-backendemea-backendZookeeper 
mdm-kafka-zookeeper-0mdm-kafka-zookeeper-1mdm-kafka-zookeeper-2Zookeeperlogsemea-backendMongomongo-0Mongologsemea-backendKibanakibana-kb-*EFK - kibanakubectl logs {{pod name}} --namespace emea-backendemea-backendFluentDfluentd-*EFK - fluentdkubectl logs {{pod name}} --namespace emea-backendemea-backendElasticsearchelasticsearch-es-default-0elasticsearch-es-default-1EFK - elasticsearchkubectl logs {{pod name}} --namespace emea-backendemea-backendSQS ExporterTODOSQS Reltio exporterkubectl logs {{pod name}} --namespace emea-backendmonitoringcAdvisormonitoring-cadvisor-*Docker Monitoring - Prometheuskubectl logs {{pod name}} --namespace monitoringemea-backendMongo Connectormonstache-*EFK - mongo → elasticsearch exporterkubectl logs {{pod name}} --namespace emea-backendemea-backendMongo exportermongo-exporter-*mongo metrics exporter---emea-backendGit2Consulgit2consul-*GIT to Consul loaderkubectl logs {{pod name}} --namespace emea-backendemea-backendConsulconsul-consul-server-0consul-consul-server-1consul-consul-server-2Consulkubectl logs {{pod name}} --namespace emea-backendemea-backendSnowflake connectoremea-dev-mdm-connect-cluster-connect-*emea-qa-mdm-connect-cluster-connect-*emea-stage-mdm-connect-cluster-connect-*Snowflake Kafka Connectorkubectl logs {{pod name}} --namespace emea-backendmonitoringKafka Connect Exportermonitoring-jdbc-snowflake-exporter-emea-dev-*monitoring-jdbc-snowflake-exporter-emea-qa-*monitoring-jdbc-snowflake-exporter-emea-stage-*Kafka Connect metric exporterkubectl logs {{pod name}} --namespace monitoringemea-backendAKHQakhq-*Kafka UIlogsCertificates ResourceCertificate LocationValid fromValid to Issued ToKibana, Elasticsearch, Kong, Airflow, Consul, 
Prometheus,http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/emea/nprod/namespaces/kong/config_files/certs2022/03/042024/03/03https://api-emea-nprod-gbl-mdm-hub.COMPANY.comKafkahttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/emea/nprod/namespaces/emea-backend2022/03/072024/03/06kafka-emea-nprod-gbl-mdm-hub.COMPANY.com"
},
{
"title": "EMEA DEV Services",
"pageID": "181022906",
"pageLink": "/display/GMDM/EMEA+DEV+Services",
"content": "HUB EndpointsAPI & Kafka & S3 & UIResource NameEndpointGateway API OAuth2 External - DEVhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-emea-devPing Federatehttps://devfederate.COMPANY.com/as/introspect.oauth2Gateway API KEY auth - DEVhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-emea-devKafkakafka-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094MDM HUB S3 s3://pfe-atp-eu-w1-nprod-mdmhub/emea/devHUB UIhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ui-emea-dev/#/dashboardSnowflake MDM DataMartResource NameEndpointDB Urlhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comDB NameCOMM_EMEA_MDM_DMART_DEV_DBDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_EMEA_MDM_DMART_DEVOPS_DEV_ROLEResource NameEndpointHUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=emea_dev&var-node=All&var-type=entitiesKafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=emea_dev&var-kube_env=emea_nprod&var-topic=All&var-instance=All&var-node=JMX Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=emea_dev&var-component=mdm_manager&var-instance=All&var-node=Konghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=emea_nprod&var-service=All&var-instance=AllMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=emea_nprod&var-instance=10.90.130.202:9216&var-node_instance=10.90.129.220&var-interval=$__auto_interval_intervalKube Statehttps://mdm-monitoring.COMPANY.com/grafana/d/garysdevil-kube-state-metrics-v2/kube-state?orgId=1&refresh=30s&var-cluster=atp-mdmhub-nprod-emea&var-node=All&var-namespace=All&var-datasource=PrometheusPod Monitoringhttps://mdm-monitoring.COMPANY.com/grafana/d/AAOMjeHmk/pod-monitoring?orgId=1&refresh=10s&var-env=emea_nprod&var-namespace=AllPVC 
Monitoringhttps://mdm-monitoring.COMPANY.com/grafana/d/xLgt8oTik/portworx-cluster-monitoring?orgId=1&var-cluster=atp-mdmhub-nprod-emea&var-node=AllResource NameEndpointKibanahttps://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/ (DEV prefixed dashboards)DocumentationResource NameEndpointManager API documentationhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-emea-dev/swagger-ui/index.html?configUrl=/api-gw-spec-emea-dev/v3/api-docs/swagger-configBatch Service API documentationhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-emea-dev/swagger-ui/index.html?configUrl=/api-batch-spec-emea-dev/v3/api-docs/swagger-configDCR Service 2 API documentationhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-dcr-spec-emea-dev/swagger-ui/index.html?configUrl=/api-dcr-spec-emea-dev/v3/api-docs/swagger-configAirflowResource NameEndpointAirflow UIhttps://airflow-emea-nprod-gbl-mdm-hub.COMPANY.com/ConsulResource NameEndpointConsul UIhttps://consul-emea-nprod-gbl-mdm-hub.COMPANY.com/AKHQ - KafkaResource NameEndpointAKHQ Kafka UIhttps://akhq-emea-nprod-gbl-mdm-hub.COMPANY.com/ClientsETL - COMPANY (GBLUS)MDM SystemsReltioDEV - wn60kG248ziQSMWResource NameEndpointSQS queue namehttps://eu-west-1.queue.amazonaws.com/930358522410/mpe-01_wn60kG248ziQSMWReltiohttps://mpe-01.reltio.com/ui/wn60kG248ziQSMWhttps://mpe-01.reltio.com/reltio/api/wn60kG248ziQSMWReltio Gateway Usersvc-pfe-mdmhubRDMhttps://rdm.reltio.com/lookups/rQHwiWkdYGZRTNqInternal ResourcesResource NameEndpointMongomongodb://mongo-emea-nprod-gbl-mdm-hub.COMPANY.com:27017Kafkahttp://kafka-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094/ - SASL SSLKibanahttps://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/Elasticsearchhttps://elastic-emea-nprod-gbl-mdm-hub.COMPANY.com/"
},
{
"title": "EMEA QA Services",
"pageID": "192383454",
"pageLink": "/display/GMDM/EMEA+QA+Services",
"content": "HUB EndpointsAPI & Kafka & S3 & UIResource NameEndpointGateway API OAuth2 External - DEVhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-emea-qaPing Federatehttps://devfederate.COMPANY.com/as/introspect.oauth2Gateway API KEY auth - DEVhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-emea-qaKafkakafka-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094MDM HUB S3 s3://pfe-atp-eu-w1-nprod-mdmhub/emea/qaHUB UIhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ui-emea-qa/#/dashboardSnowflake MDM DataMartResource NameEndpointDB Urlhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comDB NameCOMM_EMEA_MDM_DMART_QA_DBDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_EMEA_MDM_DMART_QA_DEVOPS_ROLEGrafana dashboardsResource NameEndpointHUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=emea_qa&var-node=All&var-type=entitiesKafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=emea_qa&var-topic=All&var-node=1&var-instance=euw1z2dl112.COMPANY.com:9102Host Statisticshttps://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-env=emea_nprod&var-job=node-exporter&var-node=10.90.129.220&var-port=9100Pod monitoringhttps://mdm-monitoring.COMPANY.com/grafana/d/AAOMjeHmk/pod-monitoring?orgId=1&var-env=emea_nprod&var-namespace=AllJMX Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=emea_qa&var-component=batch_service&var-instance=All&var-node=Konghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=emea_nprod&var-service=All&var-instance=AllMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=emea_nprod&var-instance=10.90.130.202:9216&var-node_instance=10.90.129.220&var-interval=$__auto_interval_intervalKibana dashboardsResource 
NameEndpointKibanahttps://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/home (QA prefixed dashboards)DocumentationResource NameEndpointManager API documentationhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-emea-qa/swagger-ui/index.htmlBatch Service API documentationhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-emea-qa/swagger-ui/index.htmlAirflowResource NameEndpointAirflow UIhttps://airflow-emea-nprod-gbl-mdm-hub.COMPANY.com/login/?next=https%3A%2F%2Fairflow-emea-nprod-gbl-mdm-hub.COMPANY.com%2FhomeConsulResource NameEndpointConsul UIhttps://consul-emea-nprod-gbl-mdm-hub.COMPANY.com/ui/AKHQ - KafkaResource NameEndpointAKHQ Kafka UIhttps://akhq-emea-nprod-gbl-mdm-hub.COMPANY.com/loginClientsETL - COMPANY (GBLUS)MDM SystemsReltioQA - vke5zyYwTifyeJSResource NameEndpointSQS queue namehttps://eu-west-1.queue.amazonaws.com/930358522410/mpe-01_vke5zyYwTifyeJSReltiohttps://mpe-01.reltio.com/ui/vke5zyYwTifyeJShttps://mpe-01.reltio.com/reltio/api/vke5zyYwTifyeJSReltio Gateway Usersvc-pfe-mdmhubRDMhttps://rdm.reltio.com/lookups/jIqfd8krU6ua5kRInternal ResourcesResource NameEndpointMongomongodb://mongo-emea-nprod-gbl-mdm-hub.COMPANY.com:27017Kafkahttp://kafka-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094/ - SASL SSLKibanahttps://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/homeElasticsearchhttps://elastic-emea-nprod-gbl-mdm-hub.COMPANY.com/"
},
{
"title": "EMEA STAGE Services",
"pageID": "192383457",
"pageLink": "/display/GMDM/EMEA+STAGE+Services",
"content": "HUB EndpointsAPI & Kafka & S3 & UIResource NameEndpointGateway API OAuth2 External - DEVhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-emea-stagePing Federatehttps://stgfederate.COMPANY.com/as/introspect.oauth2Gateway API KEY auth - DEVhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-emea-stageKafkakafka-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094MDM HUB S3 s3://pfe-atp-eu-w1-nprod-mdmhub/emea/stageHUB UIhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ui-emea-stage/#/dashboardSnowflake MDM DataMartResource NameEndpointDB Urlhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comDB NameCOMM_EMEA_MDM_DMART_STG_DBDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_EMEA_MDM_DMART_STG_DEVOPS_ROLEResource NameEndpointHUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=emea_stage&var-component=mdm_manager&var-component_publisher=event_publisher&var-component_subscriber=reltio_subscriber&var-instance=All&var-type=entitiesKafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=emea_stage&var-kube_env=emea_nprod&var-topic=All&var-instance=All&var-node=Host Statisticshttps://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-env=emea_nprod&var-job=node-exporter&var-node=10.90.129.220&var-port=9100Pod monitoringhttps://mdm-monitoring.COMPANY.com/grafana/d/AAOMjeHmk/pod-monitoring?orgId=1&var-env=emea_nprod&var-namespace=AllJMX 
Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=emea_stage&var-component=batch_service&var-instance=All&var-node=Konghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=emea_nprod&var-service=All&var-instance=AllMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=emea_nprod&var-instance=10.90.130.202:9216&var-node_instance=10.90.129.220&var-interval=$__auto_interval_intervalResource NameEndpointKibanahttps://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/home (STAGE prefixed dashboards)DocumentationResource NameEndpointManager API documentationhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-emea-stage/swagger-ui/index.html?configUrl=/api-gw-spec-emea-stage/v3/api-docs/swagger-configBatch Service API documentationhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-batch-spec-emea-stage/swagger-ui/index.html?configUrl=/api-batch-spec-emea-stage/v3/api-docs/swagger-configAirflowResource NameEndpointAirflow UIhttps://airflow-emea-nprod-gbl-mdm-hub.COMPANY.com/login/?next=https%3A%2F%2Fairflow-emea-nprod-gbl-mdm-hub.COMPANY.com%2FhomeConsulResource NameEndpointConsul UIhttps://consul-emea-nprod-gbl-mdm-hub.COMPANY.com/ui/AKHQ - KafkaResource NameEndpointAKHQ Kafka UIhttps://akhq-emea-nprod-gbl-mdm-hub.COMPANY.com/loginClientsETL - COMPANY (GBLUS)MDM SystemsReltioSTAGE - Dzueqzlld107BVWResource NameEndpointSQS queue namehttps://eu-west-1.queue.amazonaws.com/930358522410/mpe-01_Dzueqzlld107BVWReltiohttps://mpe-01.reltio.com/ui/Dzueqzlld107BVWhttps://mpe-01.reltio.com/reltio/api/Dzueqzlld107BVWReltio Gateway Usersvc-pfe-mdmhubRDMhttps://rdm.reltio.com/lookups/TBxXCy2Z6LZ8nbnInternal ResourcesResource NameEndpointMongomongodb://mongo-emea-nprod-gbl-mdm-hub.COMPANY.com:27017Kafkahttp://kafka-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094/ - SASL 
SSLKibanahttps://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/homeElasticsearchhttps://elastic-emea-nprod-gbl-mdm-hub.COMPANY.com/"
},
{
"title": "GBL DEV Services",
"pageID": "250130206",
"pageLink": "/display/GMDM/GBL+DEV+Services",
"content": "HUB EndpointsAPI & Kafka & S3 & UIResource NameEndpointGateway API OAuth2 External - DEVhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gbl-devPing Federatehttps://devfederate.COMPANY.com/as/introspect.oauth2Gateway API KEY auth - DEVhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-gbl-devKafkakafka-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094MDM HUB S3 s3://pfe-atp-eu-w1-nprod-mdmhub (eu-west-1)HUB UIhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ui-gbl-dev/#/dashboardSnowflake MDM DataMartResource NameEndpointDB Urlhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comDB NameCOMM_EU_MDM_DMART_DEV_DBDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_DEV_MDM_DMART_DEVOPS_ROLEMonitoringResource NameEndpointHUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=gbl_dev&var-node=All&var-type=entitiesKafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=gbl_dev&var-topic=All&var-node=1&var-instance=10.192.70.189:9102Pod Monitoringhttps://mdm-monitoring.COMPANY.com/grafana/d/AAOMjeHmk/pod-monitoring?orgId=1&refresh=10sKube Statehttps://mdm-monitoring.COMPANY.com/grafana/d/garysdevil-kube-state-metrics-v2/kube-state?orgId=1&refresh=30s&var-cluster=atp-mdmhub-nprod-emea&var-node=All&var-namespace=All&var-datasource=PrometheusJMX Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=gbl_dev&var-component=batch_service&var-instance=All&var-node=Konghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=emea_nprod&var-service=All&var-instance=AllMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=emea_nprod&var-instance=10.90.130.202:9216&var-node_instance=10.90.129.220&var-interval=$__auto_interval_intervalLogsResource 
NameEndpointKibanahttps://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/home (DEV prefixed dashboards)DocumentationResource NameEndpointManager API documentationhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-gbl-dev/swagger-ui/index.htmlAirflowResource NameEndpointAirflow UIhttps://airflow-emea-nprod-gbl-mdm-hub.COMPANY.com/ConsulResource NameEndpointConsul UIhttps://consul-emea-nprod-gbl-mdm-hub.COMPANY.com/AKHQ - KafkaResource NameEndpointAKHQ Kafka UIhttps://akhq-emea-nprod-gbl-mdm-hub.COMPANY.com/ClientsChinaMAPPKOL_ONEVIEWGRVGANTGRACEMedicPTRSOneMedEngageMDM SystemsReltio GBL DEV - FLy4mo0XAh0YEbNResource NameEndpointSQS queue namehttps://sqs.eu-west-1.amazonaws.com/930358522410/mpe-01_FLy4mo0XAh0YEbNReltiohttps://eu-dev.reltio.com/ui/FLy4mo0XAh0YEbNhttps://eu-dev.reltio.com/reltio/api/FLy4mo0XAh0YEbNReltio Gateway UserIntegration_Gateway_UserRDMhttps://rdm.reltio.com/%s/WUBsSEwz3SU3idO/Internal ResourcesResource NameEndpointMongomongodb://mongo-emea-nprod-gbl-mdm-hub.COMPANY.com:27017Kafkahttp://kafka-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094/ - SASL SSLKibanahttps://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/home#/Elasticsearchhttps://elastic-apac-nprod-gbl-mdm-hub.COMPANY.com"
},
{
"title": "GBL QA Services",
"pageID": "250130235",
"pageLink": "/display/GMDM/GBL+QA+Services",
"content": "HUB EndpointsAPI & Kafka & S3 & UIGateway API OAuth2 External - DEVhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gbl-qaPing Federatehttps://devfederate.COMPANY.com/as/introspect.oauth2Gateway API KEY auth - DEVhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-gbl-qaKafkakafka-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094MDM HUB S3 s3://pfe-atp-eu-w1-nprod-mdmhub (eu-west-1)HUB UIhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ui-gbl-qa/#/dashboardSnowflake MDM DataMartDB Urlhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comDB NameCOMM_EU_MDM_DMART_QA_DBDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_QA_MDM_DMART_DEVOPS_ROLEMonitoringHUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=gbl_qa&var-node=All&var-type=entitiesKafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=gbl_qa&var-kube_env=emea_nprod&var-topic=All&var-instance=All&var-node=Pod Monitoringhttps://mdm-monitoring.COMPANY.com/grafana/d/AAOMjeHmk/pod-monitoring?orgId=1&refresh=10s&var-env=emea_nprod&var-namespace=AllKube Statehttps://mdm-monitoring.COMPANY.com/grafana/d/garysdevil-kube-state-metrics-v2/kube-state?orgId=1&refresh=30s&var-cluster=atp-mdmhub-nprod-emea&var-node=All&var-namespace=All&var-datasource=PrometheusJMX Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=gbl_qa&var-component=batch_service&var-instance=All&var-node=Konghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=gbl_dev&var-service=All&var-node=AllMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=emea_nprod&var-instance=10.90.130.202:9216&var-node_instance=10.90.129.220&var-interval=$__auto_interval_intervalLogsKibanahttps://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/home(QA prefixed 
dashboards)DocumentationManager API documentationhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-gbl-qa/swagger-ui/index.htmlAirflowResource NameEndpointAirflow UIhttps://airflow-emea-nprod-gbl-mdm-hub.COMPANY.com/ConsulResource NameEndpointConsul UIhttps://consul-emea-nprod-gbl-mdm-hub.COMPANY.com/AKHQ - KafkaResource NameEndpointAKHQ Kafka UIhttps://akhq-emea-nprod-gbl-mdm-hub.COMPANY.com/ClientsChinaMAPPKOL_ONEVIEWGRVGANTGRACEMedicPTRSOneMedEngageMDM SystemsReltio GBL MAPP - AwFwKWinxbarC0ZSQS queue namehttps://sqs.eu-west-1.amazonaws.com/930358522410/mpe-01_AwFwKWinxbarC0ZReltiohttps://mpe-01.reltio.com/ui/AwFwKWinxbarC0Z/https://mpe-01.reltio.com/reltio/api/AwFwKWinxbarC0Z/Reltio Gateway UserIntegration_Gateway_UserRDMhttps://rdm.reltio.com/%s/WUBsSEwz3SU3idO/Internal ResourcesMongomongodb://mongo-emea-nprod-gbl-mdm-hub.COMPANY.com:27017Kafkakafka-emea-nprod-gbl-mdm-hub.COMPANY.com:9094 SASL SSLKibanahttps://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/home#/Elasticsearchhttps://elastic-emea-nprod-gbl-mdm-hub.COMPANY.com"
},
{
"title": "GBL STAGE Services",
"pageID": "250130297",
"pageLink": "/display/GMDM/GBL+STAGE+Services",
"content": "HUB EndpointsAPI & Kafka & S3Gateway API OAuth2 External - DEVhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gbl-stagePing Federatehttps://devfederate.COMPANY.com/as/introspect.oauth2Gateway API KEY auth - DEVhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-gbl-stageKafkakafka-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094MDM HUB S3 s3://pfe-atp-eu-w1-nprod-mdmhub (eu-west-1)HUB UIhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ui-gbl-stage/#/dashboardSnowflake MDM DataMartDB Urlhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comDB NameCOMM_EU_MDM_DMART_STG_DBDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_STG_MDM_DMART_DEVOPS_ROLEMonitoringHUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=gbl_stage&var-node=All&var-type=entitiesKafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=gbl_stage&var-kube_env=emea_nprod&var-topic=All&var-instance=All&var-node=Pod Monitoringhttps://mdm-monitoring.COMPANY.com/grafana/d/AAOMjeHmk/pod-monitoring?orgId=1&refresh=10s&var-env=emea_nprod&var-namespace=AllKube Statehttps://mdm-monitoring.COMPANY.com/grafana/d/garysdevil-kube-state-metrics-v2/kube-state?orgId=1&refresh=30s&var-cluster=atp-mdmhub-nprod-emea&var-node=All&var-namespace=All&var-datasource=PrometheusJMX Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=gbl_stage&var-component=batch_service&var-instance=All&var-node=Konghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=emea_nprod&var-service=All&var-instance=AllMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=gbl_stage&var-instance=&var-node_instance=&var-interval=$__auto_interval_intervalLogsKibanahttps://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/home(STAGE prefixed dashboards)DocumentationManager 
API documentationhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-gw-spec-gbl-stage/swagger-ui/index.htmlAirflowResource NameEndpointAirflow UIhttps://airflow-emea-nprod-gbl-mdm-hub.COMPANY.com/ConsulResource NameEndpointConsul UIhttps://consul-emea-nprod-gbl-mdm-hub.COMPANY.com/AKHQ - KafkaResource NameEndpointAKHQ Kafka UIhttps://akhq-emea-nprod-gbl-mdm-hub.COMPANY.com/ClientsChinaMAPPKOL_ONEVIEWGRVGANTGRACEMedicPTRSOneMedEngageMDM SystemsReltio GBL STAGE - FW4YTaNQTJEcN2gSQS queue namehttps://sqs.eu-west-1.amazonaws.com/930358522410/mpe-01_FW4YTaNQTJEcN2gReltiohttps://eu-dev.reltio.com/ui/FW4YTaNQTJEcN2g/https://eu-dev.reltio.com/reltio/api/FW4YTaNQTJEcN2g/Reltio Gateway UserIntegration_Gateway_UserRDMhttps://rdm.reltio.com/%s/WUBsSEwz3SU3idO/Internal ResourcesMongomongodb://mongo-emea-nprod-gbl-mdm-hub.COMPANY.com:27017Kafkahttp://kafka-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094/ - SASL SSLKibanahttps://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/home#/Elasticsearchhttps://elastic-apac-nprod-gbl-mdm-hub.COMPANY.com"
},
{
"title": "EMEA PROD Cluster",
"pageID": "196881569",
"pageLink": "/display/GMDM/EMEA+PROD+Cluster",
"content": "Physical ArchitectureKubernetes clusternameIPConsole addressresource typeAWS regionFilesystemComponentsTypeatp-mdmhub-nprod-emea10.90.96.0/2310.90.98.0/23https://pdcs-ema1p.COMPANY.com/EKS over EC2eu-west-1~100GBper node,7.3Ti x2 replicated Portworx volumesKong, Kafka, Mongo, Prometheus, MDMHUB microservicesinbound/outboundComponents & LogsPROD - microservicesENV (namespace)ComponentPodDescriptionLogsPod portsemea-prodManagermdmhub-mdm-manager-*Managerlogs8081 - application API,8000 - if remote debugging is enabled you are able to use this to debug app in environment,9000 - Prometheus exporter,8888 - spring boot actuator,8080 - serves swagger API definition - if availableemea-prodBatch Servicemdmhub-batch-service-*Batch Servicelogsemea-prodAPI routermdmhub-mdm-api-router-*API Routerlogsemea-prodReltio Subscribermdmhub-reltio-subscriber-*Reltio Subscriberlogsemea-prodEntity Enrichermdmhub-entity-enricher-*Entity Enricherlogsemea-prodCallback Servicemdmhub-callback-service-*Callback Servicelogsemea-prodEvent Publishermdmhub-event-publisher-*Event Publisherlogsemea-prodReconciliation Servicemdmhub-mdm-reconciliation-service-*Reconciliation ServicelogsPROD - backend NamespaceComponentPodDescriptionLogskongKongmdmhub-kong-kong-*API managerkubectl logs {{pod name}} --namespace kongemea-backendKafkamdm-kafka-kafka-0mdm-kafka-kafka-1mdm-kafka-kafka-2Kafkalogsemea-backendKafka Exportermdm-kafka-kafka-exporter-*Kafka Monitoring - Prometheuskubectl logs {{pod name}} --namespace emea-backendemea-backendZookeeper mdm-kafka-zookeeper-0mdm-kafka-zookeeper-1mdm-kafka-zookeeper-2Zookeeperlogsemea-backendMongomongo-0mongo-1mongo-2Mongologsemea-backendKibanakibana-kb-*EFK - kibanakubectl logs {{pod name}} --namespace emea-backendemea-backendFluentDfluentd-*EFK - fluentdkubectl logs {{pod name}} --namespace emea-backendemea-backendElasticsearchelasticsearch-es-default-0elasticsearch-es-default-1elasticsearch-es-default-2EFK - elasticsearchkubectl logs {{pod name}} 
--namespace emea-backendemea-backendSQS ExporterTODOSQS Reltio exporterkubectl logs {{pod name}} --namespace emea-backendmonitoringcAdvisormonitoring-cadvisor-*Docker Monitoring - Prometheuskubectl logs {{pod name}} --namespace monitoringemea-backendMongo Connectormonstache-*EFK - mongo → elasticsearch exporterkubectl logs {{pod name}} --namespace emea-backendemea-backendMongo exportermongo-exporter-*mongo metrics exporter---emea-backendGit2Consulgit2consul-*GIT to Consul loaderkubectl logs {{pod name}} --namespace emea-backendemea-backendConsulconsul-consul-server-0consul-consul-server-1consul-consul-server-2Consulkubectl logs {{pod name}} --namespace emea-backendemea-backendSnowflake connectoremea-prod-mdm-connect-cluster-connect-*Snowflake Kafka Connectorkubectl logs {{pod name}} --namespace emea-backendmonitoringKafka Connect Exportermonitoring-jdbc-snowflake-exporter-emea-prod-*Kafka Connect metric exporterkubectl logs {{pod name}} --namespace monitoringemea-backendAKHQakhq-*Kafka UIlogsCertificates ResourceCertificate LocationValid fromValid to Issued ToKibana, Elasticsearch, Kong, Airflow, Consul, Prometheus,http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/emea/prod/namespaces/kong/config_files/certs2022/03/042024/03/03https://api-emea-prod-gbl-mdm-hub.COMPANY.com/Kafkahttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/emea/prod/namespaces/emea-backend2022/03/072024/03/06https://kafka-emea-prod-gbl-mdm-hub.COMPANY.com/"
},
{
"title": "EMEA PROD Services",
"pageID": "196881867",
"pageLink": "/display/GMDM/EMEA+PROD+Services",
"content": "HUB EndpointsAPI & Kafka & S3 & UIResource NameEndpointGateway API OAuth2 External - PRODhttps://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/ext-api-gw-emea-prodPing Federatehttps://prodfederate.COMPANY.com/as/token.oauth2Gateway API KEY auth - PRODhttps://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-gw-emea-prodKafkakafka-b1-emea-k8s-prod-gbl-mdm-hub.COMPANY.com:9094MDM HUB S3 s3://pfe-atp-eu-w1-prod-mdmhub/emea/prodHUB UIhttps://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/ui-emea-prod/#/dashboardSnowflake MDM DataMartResource NameEndpointDB Urlhttps://emeaprod01.eu-west-1.privatelink.snowflakecomputing.com/DB NameCOMM_EMEA_MDM_DMART_PROD_DBDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_EMEA_MDM_DMART_PROD_DEVOPS_ROLEMonitoringResource NameEndpointHUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=emea_prod&var-node=All&var-type=entitiesHUB Batch Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/gz0X6rkMk/hub-batch-performance?orgId=1&refresh=10s&var-env=emea_prod&var-node=All&var-name=AllKafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=emea_prod&var-topic=All&var-node=5&var-instance=euw1z1pl117.COMPANY.com:9102Host Statisticshttps://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-env=emea_prod&var-job=node_exporter&var-node=euw1z2pl113.COMPANY.com&var-port=9100Docker monitoringhttps://mdm-monitoring.COMPANY.com/grafana/d/Z1VgYm6iz/docker-monitoring?orgId=1&refresh=10s&var-env=emea_prod&var-node=1JMX 
Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=emea_prod&var-component=manager&var-node=5&var-instance=euw1z1pl117.COMPANY.com:9104Konghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=emea_prod&var-service=All&var-node=AllMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=emea_prod&var-instance=euw1z2pl115.COMPANY.com:9120&var-node_instance=euw1z2pl115.COMPANY.com&var-interval=$__auto_interval_intervalLogsResource NameEndpointKibanahttps://kibana-emea-prod-gbl-mdm-hub.COMPANY.com/ (PROD prefixed dashboards)DocumentationResource NameEndpointManager API documentationhttps://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-gw-spec-emea-prod/swagger-ui/index.html?configUrl=/api-gw-spec-emea-prod/v3/api-docs/swagger-configBatch Service API documentationhttps://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-batch-spec-emea-prod/swagger-ui/index.html?configUrl=/api-batch-spec-emea-prod/v3/api-docs/swagger-configAirflowResource NameEndpointAirflow UIhttps://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/homeConsulResource NameEndpointConsul UIhttps://consul-emea-prod-gbl-mdm-hub.COMPANY.com/ui/dc1/servicesAKHQ - KafkaResource NameEndpointAKHQ Kafka UIhttps://akhq-emea-prod-gbl-mdm-hub.COMPANY.com/loginClientsETL - COMPANY (GBLUS)MDM SystemsReltioPROD_EMEA - Xy67R0nDA10RUV6Resource NameEndpointSQS queue namehttps://sqs.eu-west-1.amazonaws.com/930358522410/eu-360_Xy67R0nDA10RUV6Reltiohttps://eu-360.reltio.com/reltio/api/Xy67R0nDA10RUV6 - APIhttps://eu-360.reltio.com/ui/Xy67R0nDA10RUV6/# - UIReltio Gateway Usersvc-pfe-mdmhub-prodRDMhttps://rdm.reltio.com/%s/uJG2vepGEXEHmrI/Internal ResourcesResource 
NameEndpointMongomongodb://mongo-emea-prod-gbl-mdm-hub.COMPANY.com:27017Kafkahttp://kafka-b1-emea-k8s-prod-gbl-mdm-hub.COMPANY.com:9094/,http://kafka-b2-emea-k8s-prod-gbl-mdm-hub.COMPANY.com:9094/,http://kafka-b3-emea-k8s-prod-gbl-mdm-hub.COMPANY.com:9094/Kibanahttps://kibana-emea-prod-gbl-mdm-hub.COMPANY.com/Elasticsearchhttps://elastic-emea-prod-gbl-mdm-hub.COMPANY.com/"
},
{
"title": "GBL PROD Services",
"pageID": "284792395",
"pageLink": "/display/GMDM/GBL+PROD+Services",
"content": "HUB EndpointsAPI & Kafka & S3 & UIGateway API OAuth2 External - PRODhttps://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/ext-api-gbl-prodPing Federatehttps://prodfederate.COMPANY.com/as/token.oauth2Gateway API KEY auth - PRODhttps://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-gbl-prodKafkakafka-b1-emea-k8s-prod-gbl-mdm-hub.COMPANY.com:9094MDM HUB S3 s3://pfe-baiaes-eu-w1-project/mdmHUB UIhttps://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/ui-gbl-prod/#/dashboardSnowflake MDM DataMartDB Urlhttps://emeaprod01.eu-west-1.privatelink.snowflakecomputing.com/DB NameCOMM_EU_MDM_DMART_PROD_DBDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_GBL_MDM_DMART_PROD_DEVOPS_ROLEMonitoringHUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=gbl_prod&var-component=mdm_manager&var-component_publisher=event_publisher&var-component_subscriber=reltio_subscriber&var-instance=All&var-type=entitiesKafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=gbl_prod&var-kube_env=emea_prod&var-topic=All&var-instance=All&var-node=Host Statisticshttps://mdm-monitoring.COMPANY.com/grafana/d/Z1VgYm6iz/docker-monitoring?orgId=1&refresh=10s&var-env=emea_prod&var-node=&var-instance=10.90.130.122Pods monitoringhttps://mdm-monitoring.COMPANY.com/grafana/d/Z1VgYm6iz/docker-monitoring?orgId=1&refresh=10s&var-env=emea_prod&var-node=&var-instance=10.90.130.122JMX 
Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=emea_prod&var-component=manager&var-node=5&var-instance=euw1z1pl117.COMPANY.com:9104Konghttps://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kong?orgId=1&refresh=5s&var-env=emea_prod&var-service=All&var-node=AllMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=emea_prod&var-instance=10.90.142.48:9216&var-node_instance=euw1z2pl115.COMPANY.com&var-interval=$__auto_interval_intervalLogsKibanahttps://kibana-emea-prod-gbl-mdm-hub.COMPANY.com/ (PROD prefixed dashboards)DocumentationManager API documentationhttps://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-gw-spec-gbl-prod/swagger-ui/index.html?configUrl=/api-gw-spec-emea-prod/v3/api-docs/swagger-configAirflowAirflow UIhttps://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/homeConsulConsul UIhttps://consul-emea-prod-gbl-mdm-hub.COMPANY.com/ui/dc1/servicesAKHQ - KafkaAKHQ Kafka UIhttps://akhq-emea-prod-gbl-mdm-hub.COMPANY.com/loginClientsETL - COMPANY (GBLUS)MDM SystemsReltioPROD_EMEA - FW2ZTF8K3JpdfFlSQS queue namehttps://sqs.eu-west-1.amazonaws.com/930358522410/euprod-01_FW2ZTF8K3JpdfFlReltiohttps://eu-360.reltio.com/reltio/api/FW2ZTF8K3JpdfFl - APIhttps://eu-360.reltio.com/ui/FW2ZTF8K3JpdfFl/ - UIReltio Gateway Userpfe_mdm_apiRDMhttps://rdm.reltio.com/%s/ImsRdmCOMPANY/Internal ResourcesMongohttps://mongo-emea-prod-gbl-mdm-hub.COMPANY.com:27017Kafkahttp://kafka-b1-emea-k8s-prod-gbl-mdm-hub.COMPANY.com:9094/,http://kafka-b2-emea-k8s-prod-gbl-mdm-hub.COMPANY.com:9094/,http://kafka-b3-emea-k8s-prod-gbl-mdm-hub.COMPANY.com:9094/Kibanahttps://kibana-emea-prod-gbl-mdm-hub.COMPANY.com/Elasticsearchhttps://elastic-emea-prod-gbl-mdm-hub.COMPANY.com/"
},
{
"title": "US Trade (FLEX)",
"pageID": "164470168",
"pageLink": "/pages/viewpage.action?pageId=164470168",
"content": ""
},
{
"title": "US Non PROD Cluster",
"pageID": "164470067",
"pageLink": "/display/GMDM/US+Non+PROD+Cluster",
"content": "Physical ArchitectureHostsIDIPHostnameDocker UserResource TypeSpecificationAWS RegionFilesystemDEV●●●●●●●●●●●●●amraelp00005781.COMPANY.commdmihnprEC2r4.2xlargeus-east750 GB - /app15 GB - /var/lib/dockerComponents & LogsENVHostComponentDocker nameDescriptionLogsOpen PortsDEVDEVManagerdevmdmsrv_mdm-manager_1Gateway API/app/mdmgw/dev-mdm-srv/manager/log8849, 9104DEVDEVBatch Channeldevmdmsrv_batch-channel_1Batch file processor, S3 poller/app/mdmgw/dev-mdm-srv/batch_channel/log9121DEVDEVPublisherdevmdmhubsrv_event-publisher_1Event publisher/app/mdmhub/dev-mdm-srv/event_publisher/log9106DEVDEVSubscriberdevmdmhubsrv_reltio-subscriber_1SQS Reltio event subscriber/app/mdmhub/dev-mdm-srv/reltio_subscriber/log9105DEVDEVConsoledevmdmsrv_console_1Hawtio console9999ENVHostComponentDocker nameDescriptionLogsOpen PortsTESTDEVManagertestmdmsrv_mdm-manager_1Gateway API/app/mdmgw/test-mdm-srv/manager/log8850, 9108TESTDEVBatch Channeltestmdmsrv_batch-channel_1Batch file processor, S3 poller/app/mdmgw/test-mdm-srv/batch_channel/log9111TESTDEVPublishertestmdmhubsrv_event-publisher_1Event publisher/app/mdmhub/test-mdm-srv/event_publisher/log9110TESTDEVSubscribertestmdmhubsrv_reltio-subscriber_1SQS Reltio event subscriber/app/mdmhub/test-mdm-srv/reltio_subscriber/log9109Back-End HostComponentDocker nameDescriptionLogsOpen PortsDEVFluentDfluentdEFK - FluentD/app/efk/fluentd/log24225DEVKibanakibanaEFK - Kibanadocker logs kibana5601DEVElasticsearchelasticsearchEFK - Elasticsearch/app/efk/elasticsearch/logs9200DEVPrometheusprometheusPrometheus Federation slave serverdocker logs prometheus9119DEVMongomongo_mongo_1Mongodocker logs mongo_mongo_127017DEVMongo Exportermongo_exporterMongo → Prometheus exporter/app/mongo_exporter/logs9120DEVMonstache Connectormonstache-connectorMongo → Elasticsearch exporter8095DEVKafkakafka_kafka_1Kafkadocker logs kafka_kafka_19093, 9094, 9101DEVKafka Exporterkafka_kafka_exporter_1Kafka → Prometheus exporterdocker logs kafka_kafka_exporter_19102DEVSQS 
Exportersqs-exporter-devSQS → Prometheus exporterdocker logs sqs-exporter-dev9122DEVCadvisorcadvisorDocker → Prometheus exporterdocker logs cadvisor9103DEVKongkong_kong_1API Manager/app/mdmgw/kong/kong_logs8000, 8443, 32774DEVKong - DBkong_kong-database_1Kong Cassandra databasedocker logs kong_kong-database_19042DEVZookeeperkafka_zookeeper_1Zookeeperdocker logs kafka_zookeeper_12181DEVNode Exporter(non-docker) node_exporterPrometheus node exportersystemctl status node_exporter9100CertificatesResourceCertificate LocationValid fromValid to Issued ToKibanahttps://github.com/COMPANY/mdm-reltio-handler-env/blob/master/ssl_certs/dev_us/efk/kibana/mdm-log-management-us-nonprod.COMPANY.com.cer22.02.201907.05.2022mdm-log-management-us-nonprod.COMPANY.comKong - APIhttps://github.com/COMPANY/mdm-reltio-handler-env/blob/master/ssl_certs/dev_us/certs/mdm-ihub-us-nonprod.COMPANY.com.pem18.07.201817.07.2021CN = mdm-ihub-us-nonprod.COMPANY.comO = COMPANYKafka - Server Truststorehttps://github.com/COMPANY/mdm-reltio-handler-env/blob/master/ssl_certs/dev_us/ssl/server.truststore.jks10.07.202001.09.2026O = Default Company LtdST = Some-StateC = AUKafka - Server KeyStorehttps://github.com/COMPANY/mdm-reltio-handler-env/blob/master/ssl_certs/dev_us/ssl/server.keystore.jks10.07.202006.07.2022 CN = KafkaFlexOU = UnknownO = UnknownL = UnknownST = UnknownC = UnknownElasticsearchhttps://github.com/COMPANY/mdm-reltio-handler-env/blob/master/ssl_certs/dev_us/efk/esnode1/mdm-esnode1-us-nonprod.COMPANY.com.cer22.02.201921.02.2022mdm-esnode1-us-nonprod.COMPANY.comUnix groupsResource NameTypeDescriptionSupportuserComputer RoleLogin: mdmihnprName: SRVGBL-Pf6687993Uid: 27634358Gid: 20796763 <mdmihub>userUnix Role GroupRole: ADMIN_ROLEportsSecurity groupSG Name: PFE-SG-IHUB-APP-DEV-001http://btondemand.COMPANY.comSubmit ticket to GBL-BTI-IOD AWS FULL SUPPORTInternal ClientsNameGateway User NameAuthenticationPing Federate UserRolesCountriesSourcesTopicFLEX US userflex_nprodExternal 
OAuth2Flex-MDM_client- "CREATE_HCP"- "CREATE_HCO"- "UPDATE_HCP"- "UPDATE_HCO"- "GET_ENTITIES"- "SCAN_ENTITIES"ALL- "FLEXProposal"- "FLEX"- "FLEXIDL"- "Calculate"- "SAP"dev-out-full-flex-alltest-out-full-flex-alltest2-out-full-flex-alltest3-out-full-flex-allInternal HUB usermdm_test_userExternal OAuth2Flex-MDM_client- "CREATE_HCP"- "CREATE_HCO"- "UPDATE_HCP"- "UPDATE_HCO"- "GET_ENTITIES"- "DELETE_CROSSWALK"- "GET_RELATION"- "SCAN_ENTITIES"- "SCAN_RELATIONS"- "LOOKUPS"- "ENTITY_ATTRIBUTES_UPDATE"ALL- "FLEXProposal"- "FLEX"- "FLEXIDL"- "Calculate"- "AddrCalc"- "SAP"- "HIN"- "DEAIntegration Batch Update userintegration_batch_userKey AuthN/A- "GET_ENTITIES"- "ENTITY_ATTRIBUTES_UPDATE"- "GENERATE_ID"- "CREATE_HCO"- "UPDATE_HCO"ALL- "FLEXProposal"- "FLEX"- "FLEXIDL"- "Calculate"- "AddrCalc"dev-internal-integration-testsFLEX Batch Channel userflex_batch_devKey AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "FLEX"- "FLEXIDL"dev-internal-hco-create-flexflex_batch_testKey AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "FLEX"- "FLEXIDL"test-internal-hco-create-flexflex_batch_test2Key AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "FLEX"- "FLEXIDL"test2-internal-hco-create-flexflex_batch_test3Key AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "FLEX"- "FLEXIDL"test3-internal-hco-create-flexSAP Batch Channel usersap_batch_devKey AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "SAP"dev-internal-hco-create-sapsap_batch_testKey AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "SAP"test-internal-hco-create-sapsap_batch_test2Key AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "SAP"test2-internal-hco-create-sapsap_batch_test3Key AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "SAP"test3-internal-hco-create-sapHIN Batch Channel userhin_batch_devKey AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "HIN"dev-internal-hco-create-hinhin_batch_testKey AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- 
"HIN"test-internal-hco-create-hinhin_batch_test2Key AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "HIN"test2-internal-hco-create-hinhin_batch_test3Key AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "HIN"test3-internal-hco-create-hinDEA Batch Channel userdea_batch_devKey AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "DEA"dev-internal-hco-create-deadea_batch_testKey AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "DEA"test-internal-hco-create-deadea_batch_test2Key AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "DEA"test2-internal-hco-create-deadea_batch_test3Key AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "DEA"test3-internal-hco-create-dea340B Batch Channel user340b_batch_devKey AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "340B"dev-internal-hco-create-340b340b_batch_testKey AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "340B"test-internal-hco-create-340b"
},
{
"title": "US DEV Services",
"pageID": "164469990",
"pageLink": "/display/GMDM/US+DEV+Services",
"content": "HUB EndpointsAPI & Kafka & S3Resource NameEndpointGateway API OAuth2 External - DEVhttps://mdm-ihub-us-nonprod.COMPANY.com:8443/dev-extPing Federatehttps://devfederate.COMPANY.com/as/introspect.oauth2Gateway API KEY auth - DEVhttps://mdm-ihub-us-nonprod.COMPANY.com:8443/devKafkaamraelp00005781.COMPANY.com:9094MDM HUB S3 s3://mdmnprodamrasp22124/MonitoringResource NameEndpointHUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=us_dev&var-node=All&var-type=entitiesKafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=us_dev&var-topic=All&var-node=1&var-instance=amraelp00005781.COMPANY.com:9102Host Statisticshttps://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-job=node_exporter&var-env=us_dev&var-node=amraelp00005781.COMPANY.com&var-port=9100Docker monitoringhttps://mdm-monitoring.COMPANY.com/grafana/d/Z1VgYm6iz/docker-monitoring?orgId=1&refresh=10s&var-env=us_dev&var-node=1JMX Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=us_dev&var-component=batch_channel&var-node=1&var-instance=amraelp00005781.COMPANY.com:9121KongMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=us_dev&var-instance=amraelp00005781.COMPANY.com:9120&var-node_instance=amraelp00005781.COMPANY.com&var-interval=$__auto_interval_intervalLogsResource NameEndpointKibanahttps://mdm-log-management-us-trade-nonprod.COMPANY.com:5601/app/kibana (DEV prefixed dashboards)MDM SystemsReltio US DEV - keHVup25rN7ij3YResource NameEndpointSQS queue namehttps://sqs.us-east-1.amazonaws.com/930358522410/dev_keHVup25rN7ij3YReltiohttps://dev.reltio.com/ui/keHVup25rN7ij3Yhttps://dev.reltio.com/reltio/api/keHVup25rN7ij3YReltio Gateway UserIntegration_Gateway_US_UserRDMhttps://rdm.reltio.com/%s/aPYW1rxK6I1Op4y/Internal ResourcesResource 
NameEndpointMongomongodb://amraelp00005781.COMPANY.com:27017Kafkaamraelp00005781.COMPANY.com:9094Zookeeperamraelp00005781.COMPANY.com:2181Kibanahttps://amraelp00005781.COMPANY.com:5601/app/kibanaElasticsearchhttps://amraelp00005781.COMPANY.com:9200Hawtiohttp://amraelp00005781.COMPANY.com:9999/hawtio/#/login"
},
{
"title": "US TEST (QA) Services",
"pageID": "164469988",
"pageLink": "/display/GMDM/US+TEST+%28QA%29+Services",
"content": "HUB EndpointsAPI & Kafka & S3Resource NameEndpointGateway API OAuth2 External - TESThttps://mdm-ihub-us-nonprod.COMPANY.com:8443/test-extGateway API OAuth2 External - TEST2https://mdm-ihub-us-nonprod.COMPANY.com:8443/test2-extGateway API OAuth2 External - TEST3https://mdm-ihub-us-nonprod.COMPANY.com:8443/test3-extGateway API KEY auth - TESThttps://mdm-ihub-us-nonprod.COMPANY.com:8443/testGateway API KEY auth - TEST2https://mdm-ihub-us-nonprod.COMPANY.com:8443/test2Gateway API KEY auth - TEST3https://mdm-ihub-us-nonprod.COMPANY.com:8443/test3Ping Federatehttps://devfederate.COMPANY.com/as/introspect.oauth2Kafkaamraelp00005781.COMPANY.com:9094MDM HUB S3 s3://mdmnprodamrasp22124/LogsResource NameEndpointKibanahttps://mdm-log-management-us-trade-nonprod.COMPANY.com:5601/app/kibana (TEST prefixed dashboards)MDM SystemsReltio US TEST - cnL0Gq086PrguOdResource NameEndpointSQS queue namehttps://sqs.us-east-1.amazonaws.com/930358522410/test_cnL0Gq086PrguOd Reltiohttps://test.reltio.com/ui/cnL0Gq086PrguOdhttps://test.reltio.com/reltio/api/cnL0Gq086PrguOdReltio Gateway UserIntegration_Gateway_US_UserRDMhttps://rdm.reltio.com/%s/FENBHNkytefh9dB/ Reltio US TEST2 - JKabsuFZzNb4K6kResource NameEndpointSQS queue namehttps://sqs.us-east-1.amazonaws.com/930358522410/test_JKabsuFZzNb4K6kReltiohttps://test.reltio.com/ui/JKabsuFZzNb4K6khttps://test.reltio.com/reltio/api/JKabsuFZzNb4K6kReltio Gateway UserIntegration_Gateway_US_UserRDMhttps://rdm.reltio.com/%s/dhUp0Lm9NebmqB9/ Reltio US TEST3 - Yy7KqOqppDVzJpkResource NameEndpointSQS queue namehttps://sqs.us-east-1.amazonaws.com/930358522410/test_Yy7KqOqppDVzJpkReltiohttps://test.reltio.com/ui/Yy7KqOqppDVzJpkhttps://test.reltio.com/reltio/api/Yy7KqOqppDVzJpkReltio Gateway UserIntegration_Gateway_US_UserRDMhttps://rdm.reltio.com/%s/Q4rz1LUZ9WnpVoJ/ Internal ResourcesResource 
NameEndpointMongomongodb://amraelp00005781.COMPANY.com:27107Kafkaamraelp00005781.COMPANY.com:9094Zookeeperamraelp00005781.COMPANY.com:2181Kibanahttps://amraelp00005781.COMPANY.com:5601/app/kibanaElasticsearchhttps://amraelp00005781.COMPANY.com:9200Hawtiohttp://amraelp00005781.COMPANY.com:9999/hawtio/#/login"
},
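The three Reltio TEST tenants listed above (TEST, TEST2, TEST3) follow one URL pattern for the SQS queue, Reltio UI, and Reltio API endpoints, keyed by tenant ID. A small illustrative helper, not part of the HUB codebase, that derives those endpoints from a tenant ID:

```python
# Illustrative only: derives the per-tenant endpoint URLs from the
# "US TEST (QA) Services" tables above, given a Reltio tenant ID.
SQS_BASE = "https://sqs.us-east-1.amazonaws.com/930358522410"
RELTIO_BASE = "https://test.reltio.com"

def tenant_endpoints(tenant_id: str) -> dict:
    """Builds the SQS queue, Reltio UI, and Reltio API URLs for a TEST tenant."""
    return {
        "sqs_queue": f"{SQS_BASE}/test_{tenant_id}",
        "reltio_ui": f"{RELTIO_BASE}/ui/{tenant_id}",
        "reltio_api": f"{RELTIO_BASE}/reltio/api/{tenant_id}",
    }

# e.g. the first TEST tenant from the table:
endpoints = tenant_endpoints("cnL0Gq086PrguOd")
```

The same pattern reproduces the TEST2 and TEST3 rows by substituting their tenant IDs.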
{
"title": "US PROD Cluster",
"pageID": "164470064",
"pageLink": "/display/GMDM/US+PROD+Cluster",
"content": "Physical ArchitectureHostsIDIPHostnameDocker UserResource TypeSpecificationAWS RegionFilesystemPROD1●●●●●●●●●●●●●●amraelp00006207.COMPANY.commdmihpr EC2r4.xlarge us-east-1e500 GB - /app15 GB - /var/lib/dockerPROD2●●●●●●●●●●●●●●amraelp00006208.COMPANY.commdmihprEC2r4.xlarge us-east-1e500 GB - /app15 GB - /var/lib/dockerPROD3●●●●●●●●●●●●amraelp00006209.COMPANY.commdmihprEC2r4.xlarge us-east-1e500 GB - /app15 GB - /var/lib/dockerComponents & LogsHostComponentDocker nameDescriptionLogsOpen PortsPROD1, PROD2, PROD3Managermdmgw_mdm-manager_1Gateway API/app/mdmgw/manager/log9104, 8851PROD1Batch Channelmdmgw_batch-channel_1Batch file processor, S3 poller/app/mdmgw/batch_channel/log9107PROD1, PROD2, PROD3Publishermdmhub_event-publisher_1Event publisher/app/mdmhub/event_publisher/log9106PROD1, PROD2, PROD3Subscribermdmhub_reltio-subscriber_1SQS Reltio event subscriber/app/mdmhub/reltio_subscriber/log9105Back-EndHostComponentDocker nameDescriptionLogsOpen PortsPROD1, PROD2, PROD3ElasticsearchelasticsearchEFK - Elasticsearch/app/efk/elasticsearch/logs9200PROD1, PROD2, PROD3FluentDfluentdEFK - FluentD/app/efk/fluentd/logPROD3KibanakibanaEFK - Kibanadocker logs kibana5601PROD3PrometheusprometheusPrometheus Federation slave serverdocker logs prometheus9109PROD1, PROD2, PROD3Mongomongo_mongo_1Mongodocker logs mongo_mongo_127017PROD3Monstache Connectormonstache-connectorMongo → Elasticsearch exporterPROD1, PROD2, PROD3Kafkakafka_kafka_1Kafkadocker logs kafka_kafka_19101, 9093, 9094PROD1, PROD2, PROD3Kafka Exporterkafka_kafka_exporter_1Kafka → Prometheus exporterdocker logs kafka_kafka_exporter_19102PROD1, PROD2, PROD3CadvisorcadvisorDocker → Prometheus exporterdocker logs cadvisor9103PROD3SQS Exportersqs-exporterSQS → Prometheus exporterdocker logs sqs-exporter9108PROD1, PROD2, PROD3Kongkong_kong_1API Manager/app/mdmgw/kong/kong_logs8000, 8443, 32777PROD1, PROD2, PROD3Kong - DBkong_kong-database_1Kong Cassandra databasedocker logs kong_kong-database_17000, 9042PROD1, 
PROD2, PROD3Zookeeperkafka_zookeeper_1Zookeeperdocker logs kafka_zookeeper_12181, 2888, 3888PROD1, PROD2, PROD3Node Exporter(non-docker) node_exporterPrometheus node exportersystemctl status node_exporter9100CertificatesResourceCertificate LocationValid fromValid to Issued ToKibanahttps://github.com/COMPANY/mdm-reltio-handler-env/blob/master/ssl_certs/prod_us/efk/kibana/mdm-log-management-us-trade-prod.COMPANY.com.cer22.02.201921.02.2022mdm-log-management-us-trade-prod.COMPANY.comKong - APIhttps://github.com/COMPANY/mdm-reltio-handler-env/blob/master/ssl_certs/prod_us/certs/mdm-ihub-us-trade-prod.COMPANY.com.pem04.01.202204.01.2024CN = mdm-ihub-us-trade-prod.COMPANY.comO = COMPANYKafka - Client Truststorehttps://github.com/COMPANY/mdm-reltio-handler-env/blob/master/ssl_certs/prod_us/ssl/client.truststore.jks01.09.201601.09.2026COMPANY Root CA G2Kafka - Server TruststorePROD1 - https://github.com/COMPANY/mdm-reltio-handler-env/blob/master/ssl_certs/prod_us/ssl/server1.keystore.jksPROD2 - https://github.com/COMPANY/mdm-reltio-handler-env/blob/master/ssl_certs/prod_us/ssl/server2.keystore.jksPROD3 - https://github.com/COMPANY/mdm-reltio-handler-env/blob/master/ssl_certs/prod_us/ssl/server3.keystore.jks04.01.202204.01.2024CN = mdm-ihub-us-trade-prod.COMPANY.comO = COMPANYElasticsearchesnode1 - https://github.com/COMPANY/mdm-reltio-handler-env/tree/master/ssl_certs/prod_us/efk/esnode1esnode2 - https://github.com/COMPANY/mdm-reltio-handler-env/tree/master/ssl_certs/prod_us/efk/esnode2esnode3 - https://github.com/COMPANY/mdm-reltio-handler-env/tree/master/ssl_certs/prod_us/efk/esnode322.02.201921.02.2022mdm-esnode1-us-trade-prod.COMPANY.commdm-esnode2-us-trade-prod.COMPANY.commdm-esnode3-us-trade-prod.COMPANY.comUnix groupsResource NameTypeDescriptionSupportELBLoad BalancerReference LB Name: PFE-CLB-JIRA-HARMONY-PROD-001CLB name: PFE-CLB-MDM-HUB-TRADE-PROD-001DNS name: internal-PFE-CLB-MDM-HUB-TRADE-PROD-001-1966081961.us-east-1.elb.amazonaws.comuserComputer RoleComputer 
Role: UNIX-UNIVERSAL-AWSCBSDEV-MDMIHPR-COMPUTERS-U Login: mdmihprName: SRVGBL-mdmihprUID: 25084803GID: 20796763 <mdmihub>userUnix Role GroupUnix-mdmihubProd-URole: ADMIN_ROLEportsSecurity groupSG Name: PFE-SG-IHUB-APP-PROD-001http://btondemand.COMPANY.comSubmit ticket to GBL-BTI-IOD AWS FULL SUPPORTS3S3 Bucketmdmprodamrasp42095 (us-east-1)Username: SRVC-MDMIHPRConsole login: https://bti-aws-prod-hosting.signin.aws.amazon.com/consoleInternal ClientsNameGateway User NameAuthenticationPing Federate UserRolesCountriesSourcesTopicInternal MDM Hub userpublishing_hubKey AuthN/A- "CREATE_HCP"- "CREATE_HCO"- "UPDATE_HCP"- "UPDATE_HCO"- "GET_ENTITIES"- "DELETE_CROSSWALK"- "GET_RELATION"- "SCAN_ENTITIES"- "SCAN_RELATIONS"- "LOOKUPS"- "ENTITY_ATTRIBUTES_UPDATE"ALL- "FLEXProposal"- "FLEX"- "FLEXIDL"- "Calculate"- "AddrCalc"prod-internal-reltio-eventsInternal MDM Test usermdm_test_userExternal OAuth2MDM_client- "CREATE_HCP"- "CREATE_HCO"- "UPDATE_HCP"- "UPDATE_HCO"- "GET_ENTITIES"- "DELETE_CROSSWALK"- "GET_RELATION"- "SCAN_ENTITIES"- "SCAN_RELATIONS"- "LOOKUPS"- "ENTITY_ATTRIBUTES_UPDATE"ALL- "FLEXProposal"- "FLEX"- "FLEXIDL"- "Calculate"- "AddrCalc"- "SAP"- "HIN"- "DEA"Integration Batch Update userintegration_batch_userKey AuthN/A- "GET_ENTITIES"- "ENTITY_ATTRIBUTES_UPDATE"- "GENERATE_ID"- "CREATE_HCO"- "UPDATE_HCO"ALL- "FLEXProposal"- "FLEX"- "FLEXIDL"- "Calculate"- "AddrCalc"FLEX US userflex_prodExternal OAuth2Flex-MDM_client- "CREATE_HCP"- "CREATE_HCO"- "UPDATE_HCP"- "UPDATE_HCO"- "GET_ENTITIES"- "SCAN_ENTITIES"ALL- "FLEXProposal"- "FLEX"- "FLEXIDL"- "Calculate"prod-out-full-flex-allFLEX Batch Channel userflex_batchKey AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "FLEX"- "FLEXIDL"prod-internal-hco-create-flexSAP Batch Channel usersap_batchKey AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "SAP"prod-internal-hco-create-sapHIN Batch Channel userhin_batchKey AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "HIN"prod-internal-hco-create-hinDEA Batch 
Channel userdea_batchKey AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "DEA"prod-internal-hco-create-dea340B Batch Channel user340b_batchKey AuthN/A- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"ALL- "340B"prod-internal-hco-create-340b"
},
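The components table above repeats the same exporter ports on each of the three PROD hosts (node_exporter on 9100, kafka_exporter on 9102, cadvisor on 9103). A minimal sketch, using only the hosts and ports from the table, of generating Prometheus-style scrape targets for the cluster:

```python
# Sketch only: builds "host:port" scrape targets for the exporters listed in
# the US PROD Cluster components table.
PROD_HOSTS = [
    "amraelp00006207.COMPANY.com",  # PROD1
    "amraelp00006208.COMPANY.com",  # PROD2
    "amraelp00006209.COMPANY.com",  # PROD3
]

EXPORTER_PORTS = {
    "node_exporter": 9100,   # Prometheus node exporter (non-docker)
    "kafka_exporter": 9102,  # Kafka -> Prometheus exporter
    "cadvisor": 9103,        # Docker -> Prometheus exporter
}

def scrape_targets(exporter: str) -> list:
    """Returns the scrape targets for one exporter across all PROD hosts."""
    port = EXPORTER_PORTS[exporter]
    return [f"{host}:{port}" for host in PROD_HOSTS]
```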
{
"title": "US PROD Services",
"pageID": "164469976",
"pageLink": "/display/GMDM/US+PROD+Services",
"content": "HUB EndpointsAPI & Kafka & S3Resource NameEndpointGateway API OAuth2 External - PRODhttps://mdm-ihub-us-trade-prod.COMPANY.com/gw-api-oauth-extGateway API OAuth2 - PRODhttps://mdm-ihub-us-trade-prod.COMPANY.com/gw-api-oauthGateway API KEY auth - PRODhttps://mdm-ihub-us-trade-prod.COMPANY.com/gw-apiPing Federatehttps://prodfederate.COMPANY.com/as/introspect.oauth2Kafkaamraelp00006207.COMPANY.com:9094amraelp00006208.COMPANY.com:9094amraelp00006209.COMPANY.com:9094MDM HUB S3 s3://mdmprodamrasp42095/- FLEX: PROD/inbound/FLEX- SAP: PROD/inbound/SAP- HIN: PROD/inbound/HIN- DEA: PROD/inbound/DEA- 340B: PROD/inbound/340BMonitoringResource NameEndpointHUB Performancehttps://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance?orgId=1&refresh=30s&var-env=us_prod&var-node=All&var-type=entitiesKafka Topics Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&var-env=us_prod&var-topic=All&var-node=1&var-instance=amraelp00006207.COMPANY.com:9102Host Statisticshttps://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statistics?orgId=1&refresh=10s&var-job=node_exporter&var-env=us_prod&var-node=amraelp00006207.COMPANY.com&var-port=9100Docker monitoringhttps://mdm-monitoring.COMPANY.com/grafana/d/Z1VgYm6iz/docker-monitoring?orgId=1&refresh=10s&var-env=us_prod&var-node=1JMX Overviewhttps://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overview?orgId=1&refresh=10s&var-env=us_prod&var-component=batch_channel&var-node=1&var-instance=amraelp00006207.COMPANY.com:9107KongMongoDBhttps://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodb?orgId=1&refresh=10s&var-env=us_prod&var-instance=amraelp00006209.COMPANY.com:9110&var-node_instance=amraelp00006209.COMPANY.com&var-interval=$__auto_interval_intervalLogsResource NameEndpointKibanahttps://mdm-log-management-us-trade-prod.COMPANY.com:5601/app/kibanaMDM SystemsReltio US PROD - VUUWV21sflYijwaResource NameEndpointSQS queue 
namehttps://sqs.us-east-1.amazonaws.com/930358522410/361_VUUWV21sflYijwaReltiohttps://361.reltio.com/ui/VUUWV21sflYijwa/https://361.reltio.com/reltio/api/VUUWV21sflYijwa Reltio Gateway UserIntegration_Gateway_US_UserRDMhttps://rdm.reltio.com/%s/f6dQoR9tfCpFCtm/Internal ResourcesResource NameEndpointMongomongodb://amraelp00006207.COMPANY.com:27017,amraelp00006208.COMPANY.com:27017,amraelp00006209.COMPANY.com:28018Kafkaamraelp00006207.COMPANY.com:9094amraelp00006208.COMPANY.com:9094amraelp00006209.COMPANY.com:9094Zookeeperamraelp00006207.COMPANY.com:2181amraelp00006208.COMPANY.com:2181amraelp00006209.COMPANY.com:2181Kibanahttps://amraelp00006209.COMPANY.com:5601/app/kibanaElasticsearchhttps://amraelp00006207.COMPANY.com:9200https://amraelp00006208.COMPANY.com:9200https://amraelp00006209.COMPANY.com:9200Hawtiohttp://amraelp00006207.COMPANY.com:9999/hawtio/#/loginhttp://amraelp00006208.COMPANY.com:9999/hawtio/#/loginhttp://amraelp00006209.COMPANY.com:9999/hawtio/#/login"
},
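The internal Mongo endpoint above is a comma-separated multi-host connection string across the three PROD nodes. A hedged sketch of composing such a URI from a host list; the replicaSet option is a standard MongoDB driver parameter, its value is not given on this page:

```python
# Illustrative: composes a multi-host MongoDB connection URI like the one
# listed under Internal Resources for US PROD.
def mongo_uri(hosts, replica_set=None):
    """Joins host:port pairs into a mongodb:// URI; replicaSet is optional."""
    uri = "mongodb://" + ",".join(hosts)
    if replica_set:  # standard driver option; name not stated on this page
        uri += "/?replicaSet=" + replica_set
    return uri

uri = mongo_uri([
    "amraelp00006207.COMPANY.com:27017",
    "amraelp00006208.COMPANY.com:27017",
    "amraelp00006209.COMPANY.com:28018",
])
```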
{
"title": "Components",
"pageID": "164469881",
"pageLink": "/display/GMDM/Components",
"content": ""
},
{
"title": "Apache Airflow",
"pageID": "164469951",
"pageLink": "/display/GMDM/Apache+Airflow",
"content": "DescriptionAirflow is a platform created by Apache and designed to schedule workflows called DAGs.Airflow docs: https://airflow.apache.org/docs/apache-airflow/stable/index.htmlWe run Airflow on Kubernetes using the official Airflow Helm chart: https://airflow.apache.org/docs/helm-chart/stable/index.htmlIn this architecture Airflow consists of 3 main components:Scheduler - scheduling, monitoring and executing tasksWebserver - Airflow UIDatabase (PostgreSQL)InterfacesUI e.g. https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/homeREST API /api/v1 (docs: https://airflow.apache.org/docs/apache-airflow/stable/stable-rest-api-ref.html)FlowsFlows are configured in the mdm-hub-cluster-env repository in ansible/inventory/${environment}/group_vars/gw-airflow-services/${dag_name}.yaml filesUsed flows are described in the dags list"
},
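Airflow's stable REST API mentioned above lets flows be triggered programmatically. A hedged sketch of building (not sending) a dag-run trigger request against the UI host from the page; the /api/v1/dags/{dag_id}/dagRuns path is the stable REST API endpoint, and the dag_id used below is a placeholder:

```python
import json
import urllib.request

# Base URL taken from the UI example on this page; /api/v1 is the
# stable REST API base path.
AIRFLOW_BASE = "https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com"

def trigger_dag_request(dag_id: str, conf: dict) -> urllib.request.Request:
    """Builds a POST /api/v1/dags/{dag_id}/dagRuns request (does not send it)."""
    url = f"{AIRFLOW_BASE}/api/v1/dags/{dag_id}/dagRuns"
    body = json.dumps({"conf": conf}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

In practice the request also needs the authentication configured for the given Airflow deployment.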
{
"title": "API Gateway",
"pageID": "164469910",
"pageLink": "/display/GMDM/API+Gateway",
"content": "DescriptionKong (API Gateway) is the component used as the gateway for all API requests in the MDM HUB. This component exposes only one URL to external clients, which means that all internal docker containers are secured and cannot be accessed directly. This makes it possible to track all network traffic in one place. Kong is the router that redirects requests to specific services using configured routes. Kong contains multiple additional plugins; these plugins are attached to specific services and add additional security (Key-Auth, OAuth 2.0, OAuth2-External) or user management. Only authorized Kong users are allowed to execute specific operations in the HUB.Technology:Kong is a predefined component installed using a Docker container. Kong uses the Lua language and the Nginx engine. (docker image: kong:1.1.1-centos)Kong stores the whole configuration in the Cassandra Database (docker image: cassandra:3)Kong uses a customized plugin for the PingFederate token verification - OAuth 2.0 ExternalCode link: Kong: Kong Admin API DOCOauth2 External plugin: kong/mdm-external-oauth-pluginFlowsKong is responsible for the security, user management, and access layer to the HUB: SecurityInterface NameTypeEndpoint patternDescriptionAdmin APIREST APIGET http://localhost:8001/Internal and secured PORT available only in the docker container, used by Kong to manage existing services, routes, plugins, consumers, certificatesExternal APIREST APIGET https://localhost:8443/External and secured PORT exposed to the ELB and accessed by clients. 
Dependent componentsComponentInterfaceFlowDescriptionCassandra - kong_kong-database_1TCP internal docker communicationN/Akong configuration databaseHUB MicroservicesREST internal docker communicationN/AThe route to all HUB microservices, required to expose API to external clients ConfigurationKong configuration is divided into 5 sections:1 ConsumersConfig ParameterDefault valueDescription- snowflake_api_user: create_or_update: False vars: username: snowflake_api_user plugins: - name: key-auth parameters: key: "{{ secret_kong_consumers.snowflake_api_user.key_auth.key }}"N/AConfiguration for the user with key-auth authentication - used only for the technical services users.All External OAuth2 users are configured in the 4.Routes Sections2 CertificatesConfig ParameterDefault valueDescription- gbl_mdm_hub_us_nprod: create_or_update: False vars: cert: "{{ lookup('file', '{{playbook_dir}}/ssl_certs/{{ env_name }}/certs/gbl-mdm-hub-us-nprod.COMPANY.com.pem') }}" key: "{{ lookup('file', '{{playbook_dir}}/ssl_certs/{{ env_name }}/certs/gbl-mdm-hub-us-nprod.key') }}" snis: - "gbl-mdm-hub-us-nprod.COMPANY.com" - "amraelp00007335.COMPANY.com" - "10.12.209.27"N/A Configuration of the SSL Certificate in the Kong.3 ServicesConfig ParameterDefault valueDescriptionkong_services: - create_or_update: False vars: name: "{{ kong_env }}-manager-service" url: "http://{{ kong_env }}mdmsrv_mdm-manager_1:8081" connect_timeout: 120000 write_timeout: 120000 read_timeout: 120000N/AKong Service - this is a main part of the configuration, this connects internally Kong with Docker container. Kong allows configuring multiple services with multiple routes and plugins.4 RoutesConfig ParameterDefault valueDescription- create_or_update: False vars: name: "{{ kong_env }}-manager-ext-int-api-oauth-route" service: "{{ kong_env }}-manager-service" paths: [ "/{{ kong_env }}-ext" ] methods: [ "GET", "POST", "PATCH", "DELETE" ]N/AExposes the route to the service. 
Clients using the ELB have to add the path to the API invocation to access the specified services. The "-ext" suffix defines the API that uses the External OAuth 2.0 plugin connected to PingFederate. Configures the methods that the user is allowed to invoke. 5 PluginsConfig ParameterDefault valueDescription- create_or_update: False vars: name: key-auth route: "{{ kong_env }}-manager-int-api-route" config: hide_credentials: trueN/AThe plugin type "key-auth" is used for the internal or technical users that authenticate using a security key- create_or_update: False vars: name: mdm-external-oauth route: "{{ kong_env }}-manager-ext-int-api-oauth-route" config: introspection_url: "https://devfederate.COMPANY.com/as/introspect.oauth2" authorization_value: "{{ devfederate.secret_oauth2_authorization_value }}" hide_credentials: true users_map: - "e2a6de9c38be44f4a3c1b53f50218cf7:engage"N/AThe plugin type "mdm-external-oauth" is a customized plugin used for all External Clients that use tokens generated in PingFederate.The configuration contains introspection_url - the Ping API for token verification.The most important part of this configuration is the users_map. The key is the PingFederate user, the value is the HUB user configured in the services."
},
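The Services, Routes, and Plugins sections above map one-to-one onto Kong Admin API resources. A hedged sketch of the payloads the Ansible templates ultimately produce, with the Jinja variables expanded for an assumed kong_env of "test" (field names are standard Kong 1.x Admin API fields; the expanded names are examples, not verified values):

```python
# Illustrative Kong 1.x Admin API payloads, mirroring the config sections above:
# POST http://localhost:8001/services, then /services/{name}/routes,
# then /routes/{name}/plugins.
service = {
    "name": "test-manager-service",
    "url": "http://testmdmsrv_mdm-manager_1:8081",  # internal docker container
    "connect_timeout": 120000,
    "write_timeout": 120000,
    "read_timeout": 120000,
}

route = {
    "name": "test-manager-ext-int-api-oauth-route",
    "paths": ["/test-ext"],  # "-ext" suffix marks the External OAuth 2.0 API
    "methods": ["GET", "POST", "PATCH", "DELETE"],
}

key_auth_plugin = {
    "name": "key-auth",
    "config": {"hide_credentials": True},
}
```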
{
"title": "API Router",
"pageID": "196877505",
"pageLink": "/display/GMDM/API+Router",
"content": "DescriptionThe API Router component is responsible for routing requests to regional MDM Hub services. The application exposes a REST API to call MDM Hub services from different regions simultaneously. The component provides a centralized authorization and authentication service and a transaction log feature. API Router uses the http4k library, a lightweight HTTP toolkit written in Kotlin that enables the serving and consuming of HTTP services in a functional and consistent way.Technologyjava 8,kotlin,spring bootCode link: api routerRequest flowComponentDescriptionAuthentication serviceauthenticates user by x-consumer-username headerRequest enricherdetects request sources, countries and roleAuthorization serviceauthorizes user permissions to role, countries and sourcesService callercalls MDM Hub services, tries 3 times in case of an exception; requests are routed to the appropriate MDM services based on the countries parameter; if the request contains countries from multiple regions, different regional services are called; if the request contains no countries, the default user or application country is setService response transformer and filtertransforms and/or filters service responses (e.g. data anonymization) depending on the defined request and/or response filtration parameters (e.g. 
header, http method, path)Response composercomposes responses from services, if multiple services responded, the response is concatenatedRequest enrichmentParameterMethodsourcescountriesrolecreate hcorequest body crosswalk attribute, only one allowedrequest body Country attribute, only one allowedCREATE HCOupdate hcorequest body crosswalk attribute, only one allowedrequest body Country attribute, only one allowedUPDATE_HCObatch create hcorequest body crosswalk attributes, required at least onerequest body Country attribute, only one allowedCREATE_HCObatch update hcorequest body crosswalk attributes, required at least onerequest body Country attribute, only one allowedUPDATE_HCOcreate hcprequest body crosswalk attribute, only one allowedrequest body Country attribute, only one allowedCREATE_HCPupdate hcprequest body crosswalk attribute, only one allowedrequest body Country attribute, only one allowedUPDATE_HCPbatch create hcprequest body crosswalk attributes, required at least onerequest body Country attribute, only one allowedCREATE_HCPbatch update hcprequest body crosswalk attributes, required at least onerequest body Country attribute, only one allowedUPDATE_HCPcreate mcorequest body crosswalk attribute, only one allowedrequest body Country attribute, only one allowedCREATE_MCOupdate mcorequest body crosswalk attribute, only one allowedrequest body Country attribute, only one allowedUPDATE_MCObatch create mcorequest body crosswalk attributes, required at least onerequest body Country attribute, only one allowedCREATE_MCObatch update mcorequest body crosswalk attributes, required at least onerequest body Country attribute, only one allowedUPDATE_MCOcreate entityrequest body crosswalk attribute, only one allowedrequest body Country attribute, only one allowedCREATE_ENTITYupdate entityrequest body crosswalk attribute, only one allowedrequest body Country attribute, only one allowedUPDATE_ENTITYget entities by urissources not allowedrequest param Country attribute, 0 
or more allowedGET_ENTITIESget entity by urisources not allowedrequest param Country attribute, 0 or more allowedGET_ENTITIESdelete entity by crosswalktype query param, required at least onerequest param Country attribute, 0 or more allowedDELETE_CROSSWALKget entity matchessources not allowedrequest param Country attribute, 0 or more allowedGET_ENTITY_MATCHEScreate relationrequest body crosswalk attributes, required at least onerequest param Country attribute, 0 or more allowedCREATE_RELATIONbatch create relationrequest body crosswalk attributes, required at least onerequest param Country attribute, 0 or more allowedCREATE_RELATIONget relation by urisources not allowedrequest param Country attribute, 0 or more allowedGET_RELATIONdelete relation by crosswalktype query param, required at least onerequest param Country attribute, 0 or more allowedDELETE_CROSSWALKget lookupssources not allowedrequest param Country attribute, 0 or more allowedLOOKUPSConfigurationConfig parameterDescriptiondefaultCountrydefault application instance countryusersusers configuration listed belowzoneszones configuration listed belowresponseTransformresponse transformation definitions explained belowUser configurationConfig parameterDescriptionnameuser namedescriptionuser descriptionrolesallowed user rolescountriesallowed user countriessourcesallowed user sourcesdefaultCountryuser default countryZone configurationConfig parameterDescriptionurlmdm service urluserNamemdm service user namelogMessagesflag indicates that mdm service messages should be loggedtimeoutMsmdm service request timeoutResponse transformation configurationConfig parameterDescriptionfiltersrequest and response filter configurationmapresponse body JSLT transformation definitionsFilters configurationConfig parameterDescriptionrequestrequest filter configurationresponseresponse filter configurationRequest filter configurationConfig parameterDescriptionmethodHTTP methodpathAPI REST call pathheaderslist of HTTP headers with name and 
value parametersResponse filter configurationConfig parameterDescriptionbodyresponse body JSLT transformation definitionExample configuration of response transformationAPI router configurationresponseTransform: - filters:      request:        method: GET        path: /entities.*        headers: - name: X-Consumer-Username            value: mdm_test_user      response:        body:          jstl.content: | contains(true,[for (.crosswalks) .type == "configuration/sources/HUB_CALLBACK"])    map: - jstl.content: | .crosswalks - jstl.content: | ."
},
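The Service caller row above describes country-based routing: each request country maps to a regional MDM service, countries from multiple regions fan out to multiple services, and an empty country list falls back to the default country. A minimal sketch of that rule; the region map below is hypothetical and only illustrates the grouping:

```python
# Hypothetical country-to-region assignment, used only to illustrate the
# routing rule described for the Service caller.
REGION_OF = {"US": "amer", "CA": "amer", "DE": "emea", "FR": "emea"}

def route_countries(countries, default_country="US"):
    """Groups request countries by region; an empty list uses the default."""
    if not countries:
        countries = [default_country]
    zones = {}
    for country in countries:
        zones.setdefault(REGION_OF[country], []).append(country)
    # one regional service call per key; multiple responses are concatenated
    return zones
```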
{
"title": "Batch Service",
"pageID": "164469936",
"pageLink": "/display/GMDM/Batch+Service",
"content": "DescriptionThe batch-service component is responsible for managing the batch loads to MDM Systems. It exposes the REST API that clients use to create a new instance of a batch and upload data. The component is responsible for managing the batch instances and stages, processing the data, and gathering acknowledgement responses from the Manager component. Batch service stores data in two collections: batchInstance - stores all instances of batches and statistics gathered during load - and batchEntityProcessStatus - stores metadata about all objects that were loaded through all batches. These two collections are required to manage and process the data, run the checksum deduplication process, mark entities as processed after an ACK from Reltio, and soft-delete entities in case of full file loads. The component performs asynchronous operations, using Kafka topics as the stages for each part of the load. Technology:  java 8, spring boot, mongodb, kafka-streams, apache camel, kafka, shedlock-spring, spring-schedulerCode link: batch-serviceFlowsETL BatchesBatch Controller: creating and updating batch instanceBulk Service: loading bulk dataProcessing JOBSending JOBSoftDeleting JOBACK CollectorClear CacheExposed interfacesBatch Controller - manage batch instancesInterface NameTypeEndpoint patternDescriptionCreate a new instance for the specific batchREST APIPOST /batchController/{batchName}/instancesCreates a new instance of the specific batch. Returns a Batch object with a generated ID that has to be used in all the requests below. Based on the ID the client is able to check the status or load data using this instance. It is not possible to start a new batch instance while the previous one is not completed. Get batch instance detailsREST APIGET /batchController/{batchName}/instances/{batchInstanceId}Returns current details about the specific batch instance. Returns an object with all stages, statuses, and statistics. 
Initialize the stage or complete the stage and save statistics in the cache. REST APIPOST /batchController/{batchName}/instances/{batchInstanceId}/stages/{stageName}Creates or updates the specific stage in the batch. Using this operation clients are able to do two things:1. initialize and start the stage before loading the data. In that case, the request Body should be empty.2. update and complete the stage after loading the data. In that case, the Body should contain the stage name and statistics.Clients have permission to update only "Loading" stages. The next stages are managed by the internal batch-service processes.Initialize multiple stages or complete the stages and save statistics in the cache. REST APIPOST /batchController/{batchName}/instances/{batchInstanceId}/stagesThis operation is similar to the single-stage management operation. It allows managing multiple stages in one request.Remove the specific batch instance from the cache.REST APIDELETE /batchController/{batchName}/instances/{batchInstanceId}Additional service operation used to delete the batch instances from the cache. The permission for this operation is not exposed to external clients; this operation is used only by the HUB support team. Clear cache (clear objects from the batchEntityProcessStatus collection that stores metadata of objects and is used in the deduplication logic)REST APIGET /batchController/{batchName}/_clearCacheheaders:  objectType: ENTITY/RELATION  entityType: e.g. configuration/entityTypes/HCPAdditional service operation used to clear the cache for the specific batch. The user can provide additional parameters to the API to specify what type of objects should be removed from the cache. The operation is used by clients after executing smoke tests on PROD and during testing on DEV environments. It allows clearing the cache after a load to avoid data deduplication during the load. 
Bulk Service - load data using previously created batch instancesInterface NameTypeEndpoint patternDescriptionLoad multiple entities using create operationREST APIPOST /bulkService/{batchName}/instances/{batchInstanceId}/stages/{stageName}/entitiesThe operation should be used once the user has created a new batch instance and initialized the "Loading" stage. At that moment the client is able to load entities into the MDM system. The operation accepts a bulk of entities and loads the data to a Kafka topic. With the POST operation the standard create operation is used.Load multiple entities using the partial override operationREST APIPATCH /bulkService/{batchName}/instances/{batchInstanceId}/stages/{stageName}/entitiesThis operation is similar to the above. The PATCH operation forces the use of the partialOverride operation. Load multiple relations using create operationREST APIPOST /bulkService/{batchName}/instances/{batchInstanceId}/stages/{stageName}/relationsThe operation is similar to the above. With the POST operation the standard create operation is used. Using the /relations suffix in the URI clients are able to create relation objects in MDM.Load multiple Tags using PATCH operation - append operationREST APIPATCH /bulkService/{batchName}/instances/{batchInstanceId}/stages/{stageName}/tagsThe operation should be used once the user has created a new batch instance and initialized the "Loading" stage. At that moment the client is able to load tags into the MDM system. The operation accepts a bulk of entities and loads the data to a Kafka topic. With the PATCH operation the standard append operation is used, so all tags in the input array are added to the specified profile in MDM.Load multiple Tags using delete operation - removal operationREST APIDELETE /bulkService/{batchName}/instances/{batchInstanceId}/stages/{stageName}/tagsThis operation is similar to the above. 
The DELETE operation removes selected TAGS from the MDM system.Load multiple merge requests using POST operation; this will result in a merge between two entities.REST APIPOST /bulkService/{batchName}/instances/{batchInstanceId}/stages/{stageName}/entities/_mergeThe operation should be used once the user has created a new batch instance and initialized the "Loading" stage. At that moment the client is able to load merge requests to the MDM system - this will result in a merge operation between the two entities specified in the request. The operation accepts a bulk of merge requests and loads the data to a Kafka topic. Load multiple unmerge requests using POST operation; this will result in an unmerge between two entities.REST APIPOST /bulkService/{batchName}/instances/{batchInstanceId}/stages/{stageName}/entities/_unmergeThe operation should be used once the user has created a new batch instance and initialized the "Loading" stage. At that moment the client is able to load unmerge requests to the MDM system - this will result in an unmerge operation between the two entities specified in the request. The operation accepts a bulk of unmerge requests and loads the data to a Kafka topic. Dependent componentsComponentInterfaceFlowDescriptionManagerAsyncMDMManagementServiceRouteEntitiesCreateProcesses bulk objects with entities and creates the HCP/HCO/MCO in MDM. Returns asynchronous ACK responseEntitiesUpdateProcesses entities and creates, using the partialOverride property, the HCP/HCO/MCO in MDM. Returns asynchronous ACK responseRelationsCreateProcesses bulk objects with entities and creates the HCP/HCO/MCO in MDM. 
Returns asynchronous ACK responseHub StoreMongo connectionN/AStores cache data in a mongo collectionConfigurationBatch Workflows configuration, main config for all Batches and StagesConfig ParameterDescriptionbatchWorkflows: - batchName: "ONEKEY" batchDescription: "ONEKEY - HCO and HCP entities and relations loading" stages: - stageName: "HCOLoading"The main part of the batches configuration. Each batch has to contain:batchName - the name of the specific batch, used in the API request.batchDescription - additional description for the specific batch.stages - the list of dependent stages arranged in the execution sequence.This configuration presents the workflow for the specific batch; the Administrator can set up these stages in the order required by the batch and Client requirements. The main assumptions:The "Loading" Stage is always the first one.The "Sending" Stage is dependent on the "Loading" stage.The "Processing" Stage is dependent on the "Sending" stage.There is the possibility to add 2 additional optional stages:"EntitiesUnseenDeletion" - used only once the full file is loaded and the soft-delete process is required"HCODeletesProcessing" - processes soft-deleted objects to check if all ACKs were received. Available jobs:SendingJobProcessingJobDeletingJobDeletingRelationJobIt is possible to set up different stage names but the assumption is to reuse the existing names to keep consistency.The JOBs depend on each other in two ways:softDependentStages - allows starting the next stage immediately after the dependent one is started. Used in the Sending stages to immediately send data to the Manager.dependentStages - hard dependent stages; this blocks the starting of the stage until the previous one has ended.  - stageName: "HCOSending"softDependentStages: ["HCOLoading"]processingJobName: "SendingJob"Example configuration of a Sending stage dependent on the Loading stage. 
In this stage, data is taken from the stage Kafka Topics and published to the Manager component for further processing- stageName: "HCOProcessing"dependentStages: ["HCOSending"]processingJobName: "ProcessingJob"Example configuration of the Processing stage. This stage starts once the Sending JOB is completed. It uses the batchEntityProcessStatus collection to check if all ACK responses were received from MDM. - stageName: "RelationLoading"- stageName: "RelationSending" dependentStages: [ "HCOProcessing"] softDependentStages: ["RelationLoading"] processingJobName: "SendingJob"- stageName: "RelationProcessing" dependentStages: [ "RelationSending" ] processingJobName: "ProcessingJob"The full example configuration for the Relation loading, sending, and processing stages.- stageName: "EntitiesUnseenDeletion" dependentStages: ["RelationProcessing"] processingJobName: "DeletingJob"- stageName: "HCODeletesProcessing" dependentStages: ["EntitiesUnseenDeletion"] processingJobName: "ProcessingJob"Configuration for entities. The example configuration that is used in the full files. It is triggered at the end of the Workflow and checks the data that should be removed. - stageName: "RelationsUnseenDeletion" dependentStages: ["HCODeletesProcessing"] processingJobName: "DeletingRelationJob"- stageName: "RelationDeletesProcessing" dependentStages: ["RelationsUnseenDeletion"] processingJobName: "ProcessingJob"Configuration for relations. The example configuration that is used in the full files. It is triggered at the end of the Workflow and checks the data that should be removed. 
Loading stage configuration for Entities and Relations BULK load through API requestConfig ParameterDescriptionbulkConfiguration: destinations: "ONEKEY": HCPLoading: bulkLimit: 25 destination: topic: "{{ env_local_name }}-internal-batch-onekey-hcp"The configuration contains the following:destinations - list of batches and Kafka topics onto which data should be loaded from the REST API."ONEKEY" - batch nameHCPLoading - specific configuration for the loading stagebulkLimit - limit of entities/relations in one API calldestination.topic - target topic nameSending stage configuration for Sending Entities and Relations to MDM Async API (Reltio)Config ParameterDefault valueDescriptionsendingJob: numberOfRetriesOnError: 3Number of retries once an exception occurs during Kafka events publishing  pauseBetweenRetriesSecs: 30Number of seconds to wait before the next retry idleTimeWhenProcessingEndsSec: 60Number of seconds to wait for new events before completing the Sending JOB threadPoolSize:2Number of threads used by the Kafka Producer "ONEKEY": HCPSending: source: topic: "{{ env_local_name }}-internal-batch-onekey-hcp" bulkSending: false bulkPacketSize: 10 reltioRequestTopic: "{{ env_local_name }}-internal-async-all-onekey" reltioReponseTopic: "{{ env_local_name }}-internal-async-all-onekey-ack"The specific configuration for the Sending Stage"ONEKEY" - batch nameHCPSending - specific configuration for the sending stagesource.topic - source topic name from which data is consumedbulkSending - false by default (bundling is implemented and managed in the Manager client; currently there is no need to bundle the events on the client side)bulkPacketSize - optionally, once bulkSending is true, the batch-service is able to bundle the requests. 
reltioRequestTopic - processing requests in managerreltioReponseTopic - processing ACK in batch-serviceProcessing stage config for checking the processing status of entities in MDM Async API (Reltio) - check ACK collectorConfig ParameterDefault valueDescriptionprocessingJob.pauseBetweenQueriesSecs:60Interval at which the Cache is checked to verify that all ACKs were received.Entities/Relations UnseenDeletion Job config for Reltio Request Topic and Max Deletes Limit for entities soft Delete.Config ParameterDefault valueDescriptiondeletingJob: "Symphony": "EntitiesUnseenDeletion":The specific configuration for the Deleting Stage"Symphony" - batch nameEntitiesUnseenDeletion - specific configuration for the soft-delete stagemaxDeletesLimit: 100The limit is a safety switch in case we get a corrupted file (empty or partial).It prevents deleting all profiles in Reltio in such cases.queryBatchSize: 10The number of entities/relations downloaded from Cache in one callreltioRequestTopic: "{{ env_local_name }}-internal-async-all-symphony"target topic - processing requests in managerreltioResponseTopic: "{{ env_local_name }}-internal-async-all-symphony-ack"ack topics - processing ACK in batch-serviceUsersConfig ParameterDescription- name: "mdmetl_nprod" description: "MDMETL Informatica IICS User - BATCH loader" defaultClient: "ReltioAll" roles: - "CREATE_HCP" - "CREATE_HCO" - "CREATE_MCO" - "CREATE_BATCH" - "GET_BATCH" - "MANAGE_STAGE" - "CLEAR_CACHE_BATCH" countries: - US sources: - "SHS"... batches: "Symphony": - "HCPLoading"The example ETL user configuration. The configuration is divided into the following sections:roles - available roles to create specific objects and manage batch instancescountries - list of countries that the user is allowed to loadsources - list of sources that the user is allowed to loadbatches - list of batch names with corresponding stages. 
In general, external users are able to create/edit Loading stages only.ConnectionsConfig ParameterDescriptionmongo.url: "mongodb://mdm_batch_service:{{ mongo.users.mdm_batch_service.password }}@{{ mongo.springURL }}/{{ mongo.dbName }}"Full Mongo DB URLmongo.dbName: "{{ mongo.dbName }}"Mongo database namekafka.servers: "{{ kafka.servers }}"Kafka Hostname kafka.groupId: "batch_service_{{ env_local_name }}"Batch Service component group namekafka.saslMechanism: "{{ kafka.saslMechanism }}"SASL configurationkafka.securityProtocol: "{{ kafka.securityProtocol }}"Security Protocolkafka.sslTruststoreLocation: /opt/mdm-gw-batch-service/config/kafka_truststore.jksSSL truststore file locationkafka.sslTruststorePassword: "{{ kafka.sslTruststorePassword }}"SSL truststore file passwordkafka.username: batch_serviceKafka usernamekafka.password: "{{ hub_broker_users.batch_service }}"Kafka dedicated user passwordkafka.sslEndpointAlgorithm:SSL algorithmAdvanced Kafka configuration (do not edit if not required)Config Parameterspring: kafka: properties: sasl: mechanism: ${kafka.saslMechanism} security: protocol: ${kafka.securityProtocol} ssl.endpoint.identification.algorithm: consumer: properties: max.poll.interval.ms: 600000 bootstrap-servers: - ${kafka.servers} groupId: ${kafka.groupId} auto-offset-reset: earliest max-poll-records: 50 fetch-max-wait: 1s fetch-min-size: 512000 enable-auto-commit: false ssl: trustStoreLocation: file:${kafka.sslTruststoreLocation} trustStorePassword: ${kafka.sslTruststorePassword} producer: bootstrap-servers: - ${kafka.servers} groupId: ${kafka.groupId} auto-offset-reset: earliest ssl: trustStoreLocation: file:${kafka.sslTruststoreLocation} trustStorePassword: ${kafka.sslTruststorePassword} streams: bootstrap-servers: - ${kafka.servers} applicationId: ${kafka.groupId}_ack # for Kafka Streams the GroupID has to be different than the Kafka consumer's clientId: batch_service_ID stateDir: /tmp # num-stream-threads: 1 - default 1 ssl: trustStoreLocation: 
file:${kafka.sslTruststoreLocation} trustStorePassword: ${kafka.sslTruststorePassword}Additional config (do not edit if not required)Config Parameterserver.port: 8083management.endpoint.shutdown.enabled=false:management.endpoints.web.exposure.include: prometheus, health, infospring.main.allow-bean-definition-overriding: truecamel.springboot.main-run-controller: Truecamel: component: metrics: metric-registry=prometheusMeterRegistry:server: use-forward-headers: true forward-headers-strategy: FRAMEWORKspringdoc: swagger-ui: disable-swagger-default-url: TruerestService: #service port - do not change if it runs in a docker container port: 8082schedulerTreadCount: 5"
},
{
"title": "Callback Delay Service",
"pageID": "322536130",
"pageLink": "/display/GMDM/Callback+Delay+Service",
"content": "DescriptionThe application consists of two streams - precallback and postcallback. When the precallback stream detects the need to change the ranking for a given relationship, it generates an event to the post callback stream. The post callback stream collects events in the time window for a given key and processes the last one. This allows you to avoid updating the rankings multiple times when loading relations using batch.Responsible for following transformations:HCO relation rakingApplies transformations to the Kafka input stream producing the Kafka output stream.Technology: kotlin, spring boot, MongoDB, Kafka-StreamsCode link: callback-delay-service FlowsOtherHCOtoHCOAffiliations RankingsExposed interfacesPreCallbackDelay Stream -(rankings)Interface NameTypeEndpoint patternDescriptioncallback inputKAFKA${env}-internal-reltio-full-delay-eventsEvents processed by the precallback serviceoutput  - callbacksKAFKA${env}-internal-reltio-proc-eventsResult events processed by the precallback delay serviceoutput - processing KAFKA${env}-internal-async-all-bulk-callbacksUpdateAttribute requests sent to Manager component for asynchronous processingDependent componentsComponentInterfaceFlowDescriptionManagerAsyncMDMManagementServiceRouteRelationshipAttributesUpdateUpdate relationship attributes in asynchronous modeHub StoreMongo connectionN/AGet mongodb stored relation data when Kafka cache is empty.ConfigurationMain ConfigurationDefault valueDescriptionkafka.groupId${env}-precallback-delay-serviceThe application ID. Each stream processing application must have a unique ID. The same ID must be given to all instances of the application. It is recommended to use only alphanumeric characters, (dot), - (hyphen), and _ (underscore). 
Examples: "hello_world", "hello_world-v1.0.0"kafkaOther.num.stream.threads10Number of threads used in the Kafka StreamkafkaOther.default.deserialization.exception.handlercom.COMPANY.mdm.common.streams.StructuredLogAndContinueExceptionHandlerDeserialization exception handlerkafkaOther.max.poll.interval.ms3600000Maximum number of milliseconds to wait before the next poll of eventskafkaOther.max.request.size2097152Events message sizeCallbackWithDelay Stream -(rankings)Config ParameterDefault valueDescriptionpreCallbackDelay.eventInputTopic${env}-internal-reltio-full-delay-eventsinput topicpreCallbackDelay.eventDelayTopic${env}-internal-reltio-full-callback-delay-eventsdelay stream input topic; when the precallback stream detects the need to modify ranks for a given relationship group, it produces an event for this topic. Events for a given key are aggregated in a time windowpreCallbackDelay.eventOutputTopic${env}-internal-reltio-proc-eventsoutput topic for eventspreCallbackDelay.internalAsyncBulkCallbacksTopic${env}-internal-async-all-bulk-callbacksoutput topic for callbackspreCallbackDelay.relationDataStore.storeName${env}-relation-data-storeRelation data cache store namepreCallbackDelay.rankCallback.featureActivationtrueParameter used to enable/disable the Rank featurepreCallbackDelay.rankCallback.callbackSourceHUB_CALLBACKCrosswalk used to update Reltio with Rank attributespreCallbackDelay.rankCallback.rawRelationChecksumDedupeStore.namewith-delay-raw-relation-checksum-dedupe-storetopic name that stores the rawRelation MD5 checksum - used in rank callback deduplicationpreCallbackDelay.rankCallback.rawRelationChecksumDedupeStore.retentionPeriod1hstore retention periodpreCallbackDelay.rankCallback.rawRelationChecksumDedupeStore.windowSize10mstore window sizepreCallbackDelay.rankCallback.attributeChangesChecksumDedupeStore.nameattribute-changes-checksum-dedupe-storetopic name that stores the attribute changes MD5 checksum - used in rank callback 
deduplicationpreCallbackDelay.rankCallback.attributeChangesChecksumDedupeStore.retentionPeriod1hstore retention periodpreCallbackDelay.rankCallback.attributeChangesChecksumDedupeStore.windowSize10mstore window sizepreCallbackDelay.rankCallback.activeCallbacksOtherHCOtoHCOAffiliationsDelayCallbackList of Rankers to be activatedpreCallbackDelay.rankTransform.featureActivationtrueParameter that defines if the Rank feature should be activated.preCallbackDelay.rankTransform.activationFilter.activeRankSorterOtherHCOtoHCOAffiliationsDelayRankSorterRank sorter namespreCallbackDelay.rankTransform.rankSortOrder.affiliationN/AThe source order defined for the specific Ranking. Details about the algorithm in:  OtherHCOtoHCOAffiliations RankSorterdeduplicationPost callback stream deduplication configdeduplication.pingInterval1mPost callback stream ping intervaldeduplication.duration1hPost callback stream window durationdeduplication.gracePeriod0sPost callback stream deduplication grace perioddeduplication.byteLimit122869944Post callback stream deduplication byte limitdeduplication.suppressNamecallback-rank-delay-suppressPost callback stream deduplication suppress namededuplication.namecallback-rank-delay-suppressPost callback stream deduplication namededuplication.storeNamecallback-rank-delay-suppress-deduplication-storePost callback stream deduplication store nameRank sort order config:The component allows you to set different sorting (ranking) configurations depending on the country of the relationship. Relations for selected countries are sorted based on the rankExecutionOrder configuration - in the order of the items on the list. 
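As an illustration of this ordered-criteria ranking, the sketch below sorts relations by a sequence of criteria and assigns a 1-based rank. It is a minimal, hypothetical sketch (field names "status", "source", "lud" and the helper names are illustrative, not the service's actual model), mimicking an execution order of ACTIVE, then SOURCE, then LUD:

```python
# Illustrative only: hypothetical relation fields, not the real service model.
# Mimics a rankExecutionOrder of ACTIVE -> SOURCE -> LUD (newest first).

SOURCE_ORDER = {"Reltio": 1, "ONEKEY": 2, "JPDWH": 3, "SAP": 4}

def sort_key(relation):
    return (
        0 if relation["status"] == "ACTIVE" else 1,                   # ACTIVE first
        SOURCE_ORDER.get(relation["source"], len(SOURCE_ORDER) + 1),  # configured source order
        -relation["lud"],                                             # newest update first
    )

def assign_ranks(relations):
    """Sort by the ordered criteria and assign a 1-based rank to each relation."""
    ranked = sorted(relations, key=sort_key)
    for rank, rel in enumerate(ranked, start=1):
        rel["rank"] = rank
    return ranked
```

SOURCE_ORDER here is a stand-in for the configured source-order map; the real component resolves each criterion from the per-country rankSortOrder configuration.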
The following sorters are available:ATTRIBUTE - sort relationships based on the values (or lookup codes) of defined attributesACTIVE - sort relationships based on their status (ACTIVE, NON-ACTIVE)SOURCE - sort relations based on the order of sourcesLUD - sort relations based on their update time - ascending or descending orderSample rankSortOrder configuration:rankSortOrder: affiliation: config: - countries: - AU - NZ rankExecutionOrder: - type: ACTIVE - type: ATTRIBUTE attributeName: RelationType/RelationshipDescription lookupCode: true order: REL.HIE: 1 REL.MAI: 2 REL.FPA: 3 REL.BNG: 4 REL.BUY: 5 REL.PHN: 6 REL.GPR: 7 REL.MBR: 8 REL.REM: 9 REL.GPSS: 10 REL.WPC: 11 REL.WPIC: 12 REL.DOU: 13 - type: SOURCE order: Reltio: 1 ONEKEY: 2 JPDWH: 3 SAP: 4 PFORCERX: 5 PFORCERX_ODS: 5 KOL_OneView: 6 ONEMED: 6 ENGAGE: 7 MAPP: 8 GRV: 9 GCP: 10 SSE: 11 PCMS: 12 PTRS: 13 - type: LUD"
},
{
"title": "Callback Service",
"pageID": "164469913",
"pageLink": "/display/GMDM/Callback+Service",
"content": "DescriptionResponsible for following transformations:HCO names calculationDangling affiliationsCrosswalk cleanerPotential match queue cleanerPrecallback stream - (rankings)Applies transformations to the Kafka input stream producing the Kafka output stream.Technology: java 8, spring boot, MongoDB, Kafka-StreamsCode link: callback-service FlowsCallbacksHCONames Callback for IQVIA modelDanglingAffiliations CallbackCrosswalkCleaner CallbackNotMatch CallbackPreCallbacks (Rankings/COMPANYGlobalCustomerId/Canada Micro-Bricks/HCPType)Exposed interfacesPreCallback Stream -(rankings)Interface NameTypeEndpoint patternDescriptioncallback inputKAFKA${env}-internal-reltio-full-eventsEvents enriched by the EntityEnricher component. Full JSON dataoutput  - callbacksKAFKA${env}-internal-reltio-proc-eventsEvents that are already processed by the precallback services (contains updated Ranks and Reltio callback is also processed)output - processing KAFKA${env}-internal-async-all-bulk-callbacksUpdateAttribute requests sent to Manager component for asynchronous processingHCO NamesInterface NameTypeEndpoint patternDescriptioncallback inputKAFKA${env}-internal-callback-hconame-inevents being sent by the event publisher component. Event types being considered:  HCO_CREATED, HCO_CHANGED, RELATIONSHIP_CREATED, RELATIONSHIP_CHANGEDcallback outputKAFKA${env}-internal-hconames-rel-createRelation Create requests sent to Manager component for asynchronous processingDanging AffiliationsInterface NameTypeEndpoint patternDescriptioncallback inputKAFKA${env}-internal-callback-orphanClean-inevents being sent by the event publisher component. 
Event types being considered:  'HCP_REMOVED', 'HCO_REMOVED', 'MCO_REMOVED', 'HCP_INACTIVATED', 'HCO_INACTIVATED', 'MCO_INACTIVATED'callback outputKAFKA${env}-internal-async-all-orphanCleanRelation Update (soft-delete) requests sent to Manager component for asynchronous processingCrosswalk CleanerInterface NameTypeEndpoint patternDescriptioncallback inputKAFKA${env}-internal-callback-cleaner-inevents being sent by the event publisher component. Event types being considered: 'HCO_CHANGED', 'HCP_CHANGED', 'MCO_CHANGED', 'RELATIONSHIP_CHANGED'callback outputKAFKA${env}-internal-async-all-cleaner-callbacksDelete Crosswalk or Soft-Delete requests sent to Manager component for asynchronous processingNotMatch callback (clean potential match queue)Interface NameTypeEndpoint patternDescriptioncallback inputKAFKA${env}-internal-callback-potentialMatchCleaner-inevents being sent by the event publisher component. Event types being considered:  'RELATIONSHIP_CHANGED', 'RELATIONSHIP_CREATED'callback outputKAFKA${env}-internal-async-all-notmatch-callbacksNotMatch requests sent to Manager component for asynchronous processingDependent componentsComponentInterfaceFlowDescriptionManagerMDMIntegrationServiceGetEntitiesByUrisRetrieve multiple entities by providing the list of entities URISAsyncMDMManagementServiceRouteRelationshipUpdateUpdate relationship object in asynchronous modeEntitiesUpdateUpdate entity object in asynchronous mode - set soft-deleteCrosswalkDeleteRemove Crosswalk from entity/relation in asynchronous modeNotMatchSet Not a Match between two  entitiesHub StoreMongo connectionN/AStore cache data in mongo collectionConfigurationMain ConfigurationDefault valueDescriptionkafka.groupId${env}-entity-enricherThe application ID. Each stream processing application must have a unique ID. The same ID must be given to all instances of the application. It is recommended to use only alphanumeric characters, (dot), - (hyphen), and _ (underscore). 
Examples: "hello_world", "hello_world-v1.0.0"kafkaOther.num.stream.threads10Number of threads used in the Kafka StreamkafkaOther.default.deserialization.exception.handlercom.COMPANY.mdm.common.streams.StructuredLogAndContinueExceptionHandlerDeserialization exception handlerkafkaOther.max.poll.interval.ms3600000Maximum number of milliseconds to wait before the next poll of eventskafkaOther.max.request.size2097152Events message sizegateway.apiKey${gateway.apiKey}API key used in the communication to Managergateway.logMessagesfalseParameter used to turn on/off logging the payloadgateway.url${gateway.url}Manager URLgateway.userName${gateway.userName}Manager user nameHCO NamesConfig ParameterDefault valueDescriptioncallback.hconames.eventInputTopic${env}-internal-callback-hconame-ininput topiccallback.hconames.HCPCalculateStageTopic${env}-internal-callback-hconame-hcp4calcinternal topiccallback.hconames.intAsyncHCONames${env}-internal-hconames-rel-createoutput topiccallback.hconames.deduplicationWindowDuration10The size of the windows in millisecondscallback.hconames.deduplicationWindowGracePeriod10sThe grace period to admit out-of-order events to a window.callback.hconames.dedupStoreNamehco-name-dedupe-storededuplication topic namecallback.hconames.acceptedEntityEventTypesHCO_CREATED, HCO_CHANGEDaccepted event types for entity objectscallback.hconames.acceptedRelationEventTypesRELATIONSHIP_CREATED, RELATIONSHIP_CHANGEDaccepted event types for relationship objectscallback.hconames.acceptedCountriesAI,AN,AG,AR,AW,BS,BB,BZ,BM,BO,BR,CL,CO,CR,CW,DO,EC,GT,GY,HN,JM,KY,LC,MX,NI,PA,PY,PE,PN,SV,SX,TT,UY,VGlist of countries accepted in further processing callback.hconames.impactedHcpTraverseRelationTypesconfiguration/relationTypes/Activity, configuration/relationTypes/Managed, configuration/relationTypes/RLE.MAIaccepted relationship types to traverse for impacted HCP objectscallback.hconames.mainHCOTraverseRelationTypesconfiguration/relationTypes/Activity, 
configuration/relationTypes/Managed, configuration/relationTypes/RLE.MAIaccepted relationship types to traverse for impacted main HCO objectscallback.hconames.mainHCOTypeCodes.defaultHOSPthe Type code name for the Main HCO objectcallback.hconames.mainHCOStructurTypeCodese.g.: AD:- "WFR.TSR.JUR"- "WFR.TSR.GRN"- "WFR.TSR.ETA"Contains a map where the KEY is the country and the values are the TypeCodes for the corresponding country. callback.hconames.deduplicationeither callback.hconames.deduplication or callback.hconames.windowSessionDeduplication must be setcallback.hconames.deduplication.durationduration size of time windowcallback.hconames.deduplication.gracePeriodgrace period related to time windowcallback.hconames.deduplication.byteLimitbyte limit of Suppressed.BufferConfigcallback.hconames.deduplication.suppressNamename of Suppressed.BufferConfigcallback.hconames.deduplication.namename of the Grouping step in deduplicationcallback.hconames.deduplication.storageNamewhen switching from callback.hconames.deduplication to callback.hconames.windowSessionDeduplication storageName must be differentname of Materialized Session Storecallback.hconames.deduplication.pingIntervalinterval in which ping messages are being generatedcallback.hconames.windowSessionDeduplicationeither callback.hconames.deduplication or callback.hconames.windowSessionDeduplication must be setcallback.hconames.windowSessionDeduplication.durationduration size of session windowcallback.hconames.windowSessionDeduplication.byteLimitbyte limit of Suppressed.BufferConfigcallback.hconames.windowSessionDeduplication.suppressNamename of Suppressed.BufferConfigcallback.hconames.windowSessionDeduplication.namename of the Grouping step in deduplicationcallback.hconames.windowSessionDeduplication.storageNamewhen switching from callback.hconames.deduplication to callback.hconames.windowSessionDeduplication storageName must be differentname of Materialized Session 
Storecallback.hconames.windowSessionDeduplication.pingIntervalinterval in which ping messages are being generatedPfe HCO NamesConfig ParameterDefault valueDescriptioncallback.pfeHconames.eventInputTopic${env}-internal-callback-hconame-ininput topiccallback.pfeHconames.HCPCalculateStageTopic${env}-internal-callback-hconame-hcp4calcinternal topiccallback.pfeHconames.intAsyncHCONames${env}-internal-hconames-rel-createoutput topiccallback.pfeHconames.timeWindoweither callback.pfeHconames.timeWindow or callback.pfeHconames.sessionWindow must be setcallback.pfeHconames.timeWindow.durationduration size of time windowcallback.pfeHconames.timeWindow.gracePeriodgrace period related to time windowcallback.pfeHconames.timeWindow.byteLimitbyte limit of Suppressed.BufferConfigcallback.pfeHconames.timeWindow.suppressNamename of Suppressed.BufferConfigcallback.pfeHconames.timeWindow.namename of the Grouping step in deduplicationcallback.pfeHconames.timeWindow.storageNamewhen switching from callback.pfeHconames.timeWindow to callback.pfeHconames.sessionWindow storageName must be differentname of Materialized Session Storecallback.pfeHconames.timeWindow.pingIntervalinterval in which ping messages are being generatedcallback.pfeHconames.sessionWindoweither callback.pfeHconames.timeWindow or callback.pfeHconames.sessionWindow must be setcallback.pfeHconames.sessionWindow.durationduration size of session windowcallback.pfeHconames.sessionWindow.byteLimitbyte limit of Suppressed.BufferConfigcallback.pfeHconames.sessionWindow.suppressNamename of Suppressed.BufferConfigcallback.pfeHconames.sessionWindow.namename of the Grouping step in deduplicationcallback.pfeHconames.sessionWindow.storageNamewhen switching from callback.pfeHconames.deduplication to callback.pfeHconames.windowSessionDeduplication storageName must be differentname of Materialized Session Storecallback.pfeHconames.sessionWindow.pingIntervalinterval in which ping messages are being generatedDangling AffiliationsConfig 
ParameterDefault valueDescriptioncallback.danglingAffiliations.eventInputTopic${env}-internal-callback-orphanClean-ininput topiccallback.danglingAffiliations.acceptedEntityEventTypesHCP_REMOVED, HCO_REMOVED, MCO_REMOVED, HCP_INACTIVATED, HCO_INACTIVATED, MCO_INACTIVATEDaccepted entity eventscallback.danglingAffiliations.eventOutputTopic${env}-internal-async-all-orphanCleanoutput topiccallback.danglingAffiliations.relationUpdateHeaders.HubAsyncOperationrel-updatekafka record headercallback.danglingAffiliations.exceptCrosswalkTypesconfiguration/sources/Reltiocrosswalk types to excludeCrosswalk CleanerConfig ParameterDefault valueDescriptioncallback.crosswalkCleaner.eventInputTopic${env}-internal-callback-cleaner-ininput topiccallback.crosswalkCleaner.acceptedEntityEventTypesMCO_CHANGED, HCP_CHANGED, HCO_CHANGEDaccepted entity eventscallback.crosswalkCleaner.acceptedRelationEventTypesRELATIONSHIP_CHANGEDaccepted relation eventscallback.crosswalkCleaner.hardDeleteCrosswalkTypes.alwaysconfiguration/sources/HUB_CallbackHub callback crosswalk namecallback.crosswalkCleaner.hardDeleteCrosswalkTypes.exceptconfiguration/sources/ReltioCleanserReltio cleanser crosswalk namecallback.crosswalkCleaner.hardDeleteCrosswalkRelationTypes.alwaysconfiguration/sources/HUB_CallbackHub callback crosswalk namecallback.crosswalkCleaner.hardDeleteCrosswalkRelationTypes.exceptconfiguration/sources/ReltioCleanserReltio cleanser crosswalk namecallback.crosswalkCleaner.softDeleteCrosswalkTypes.alwaysconfiguration/sources/HUB_USAGETAGCrosswalks list to soft-deletecallback.crosswalkCleaner.softDeleteCrosswalkTypes.whenOneKeyNotExistsconfiguration/sources/IQVIA_PRDP, configuration/sources/IQVIA_RAWDEACrosswalk list to soft-delete when the ONEKEY crosswalk does not existcallback.crosswalkCleaner.softDeleteCrosswalkTypes.exceptconfiguration/sources/HUB_CALLBACK, configuration/sources/ReltioCleanserCrosswalk to excludecallback.crosswalkCleaner.hardDeleteHeaders.HubAsyncOperationcrosswalk-deletekafka 
record headercallback.crosswalkCleaner.hardDeleteRelationHeaders.HubAsyncOperationcrosswalk-relation-deletekafka record headercallback.crosswalkCleaner.softDeleteHeaders.hcp.HubAsyncOperationhcp-updatekafka record headercallback.crosswalkCleaner.softDeleteHeaders.hco.HubAsyncOperationhco-updatekafka record headercallback.crosswalkCleaner.oneKeyconfiguration/sources/ONEKEYONEKEY crosswalk namecallback.crosswalkCleaner.eventOutputTopic${env}-internal-async-all-cleaner-callbacksoutput topiccallback.crosswalkCleaner.softDeleteOneKeyReferbackCrosswalkTypes.referbackLookupCodesHCPIT.RBI, HCOIT.RBIOneKey referback crosswalk lookup codescallback.crosswalkCleaner.softDeleteOneKeyReferbackCrosswalkTypes.oneKeyLookupCodesHCPIT.OK, HCOIT.OKOneKey crosswalk lookup codesNotMatch callback (clean potential match queue)Config ParameterDefault valueDescriptioncallback.potentialMatchLinkCleaner.eventInputTopic${env}-internal-callback-potentialMatchCleaner-ininput topiccallback.potentialMatchLinkCleaner.acceptedRelationEventTypes- RELATIONSHIP_CREATED- RELATIONSHIP_CHANGEDaccepted relation eventscallback.potentialMatchLinkCleaner.acceptedRelationObjectTypes- "configuration/relationTypes/FlextoHCOSAffiliations"- "configuration/relationTypes/FlextoDDDAffiliations"- "configuration/relationTypes/SAPtoHCOSAffiliations"accepted relationship typescallback.potentialMatchLinkCleaner.matchTypesInCache- "AUTO_LINK"- "POTENTIAL_LINK"PotentialMatch cache object typescallback.potentialMatchLinkCleaner.notMatchHeaders.hco.HubAsyncOperationentities-not-match-setkafka record headercallback.potentialMatchLinkCleaner.eventOutputTopic${env}-internal-async-all-notmatch-callbacksoutput topicPreCallback Stream -(rankings)Config ParameterDefault valueDescriptionpreCallback.eventInputTopic${env}-internal-reltio-full-eventsinput topicpreCallback.eventOutputTopic${env}-internal-reltio-proc-eventsoutput topic for eventspreCallback.internalAsyncBulkCallbacksTopic${env}-internal-async-all-bulk-callbacksoutput 
topic for callbackspreCallback.mdmIntegrationService.baseURLN/AManager URL defined per environmentpreCallback.mdmIntegrationService.apiKeyN/AManager secret API KEY defined per environmentpreCallback.mdmIntegrationService.logMessagesfalseParameter used to turn on/off logging the payloadpreCallback.skipEventTypesENTITY_MATCHES_CHANGED, ENTITY_AUTO_LINK_FOUND, ENTITY_POTENTIAL_LINK_FOUND, DCR_CREATED, DCR_CHANGED, DCR_REMOVEDEvents skipped in the processingpreCallback.oldEventsDeletion.maintainDuration10mCache duration time (for callbacks MD5 checksum)preCallback.oldEventsDeletion.interval5mCache deletion intervalpreCallback.rankCallback.featureActivationtrueParameter used to enable/disable the Rank featurepreCallback.rankCallback.callbackSourceHUB_CallbackCrosswalk used to update Reltio with Rank attributespreCallback.rankCallback.activationFilter.countriesAG, AI, AN, AR, AW, BB, BL, BM, BO, BR, BS, BZ, CL, CO, CR, CW, DE, DO, EC, ES, FR, GF, GP, GT, GY, HK, HN, ID, IN, IT, JM, JP, KY, LC, MC, MF, MQ, MX, MY, NL, NC, NI, PA, PE, PF, PH, PK, PM, PN, PY, RE, RU, SA, SG, SV, SX, TF, TH, TR, TT, TW, UY, VE, VG, VN, WF, YT, XX, EMPTYList of countries for which the process activates the Rank (different between GBL and GBLUS)preCallback.rankCallback.rawEntityChecksumDedupeStoreNameraw-entity-checksum-dedupe-storetopic name that stores the rawEntity MD5 checksum - used in rank callback deduplicationpreCallback.rankCallback.attributeChangesChecksumDedupeStoreNameattribute-changes-checksum-dedupe-storetopic name that stores the attribute changes MD5 checksum - used in rank callback deduplicationpreCallback.rankCallback.forwardMainEventsDuringPartialUpdatefalseThe parameter used to define if we want to forward partial events. By default it is false, so only events that are fully calculated are sent furtherpreCallback.rankCallback.ignoreAndRemoveDuplicatesfalseThe parameter used when the Ranking may contain duplicates in the group. 
It is set to false because Reltio now removes duplicated IdentifierspreCallback.rankCallback.activeCleanerCallbacksSpecialityCleanerCallback, IdentifierCleanerCallback, EmailCleanerCallback, PhoneCleanerCallbackList of cleaner callbacks to be activatedpreCallback.rankCallback.activeCallbacksSpecialityCallback, AddressCallback, AffiliationCallback, IdentifierCallback, EmailCallback, PhoneCallbackList of Rankers to be activatedpreCallback.rankTransform.featureActivationtrueParameter that defines if the Rank feature should be activated.preCallback.rankTransform.activationFilter.activeRankSorterSpecialtyRankSorter, AffiliationRankSorter, AddressRankSorter, IdentifierRankSorter, EmailRankSorter, PhoneRankSorterpreCallback.rankTransform.rankSortOrder.affiliationN/AThe source order defined for the specific Ranking. Details about the algorithm in:  Affiliation RankSorterpreCallback.rankTransform.rankSortOrder.phoneN/AThe source order defined for the specific Ranking. Details about the algorithm in: Phone RankSorterpreCallback.rankTransform.rankSortOrder.emailN/AThe source order defined for the specific Ranking. Details about the algorithm in: Email RankSorterpreCallback.rankTransform.rankSortOrder.specialitiesN/AThe source order defined for the specific Ranking. Details about the algorithm in: Specialty RankSorterpreCallback.rankTransform.rankSortOrder.identifierN/AThe source order defined for the specific Ranking. Details about the algorithm in: Identifier RankSorterpreCallback.rankTransform.rankSortOrder.addressSource.ReltioN/AThe source order defined for the specific Ranking. Details about the algorithm in: Address RankSorterpreCallback.rankTransform.rankSortOrder.addressesSource.ReltioN/AThe source order defined for the specific Ranking. Details about the algorithm in: Addresses RankSorter"
},
{
"title": "China Selective Router",
"pageID": "284812312",
"pageLink": "/display/GMDM/China+Selective+Router",
"content": "DescriptionThe china-selective-router component is responsible for enriching events and transformig from COMPANY model to Iqivia model. Component is using Asynchronous operation using kafka topics. To transform COMPANY object it needs to be consumed from input topic and based on configuration it is enriched, hco entity is connected with mainHco and as a last step event model is transformed to Iqivia model, after all operations event is sending to output topic.Technology:  java 11, spring boot, kafka-streams, kafkaCode link: china-selective-routerFlowsTransformation flowExposed interfacesInterface NameTypeEndpoint patternDescriptionEvent transformer topologyKAFKAtopic: {env}-{topic_postfix}Transform event from COMPANY model to Iqivia model, and send to ouptut topicDependent componentsComponentInterfaceFlowDescriptionData modelHCPModelConverterN/AConverter to transform Entity to COMPANY model or to Iqivia modelConfigurationConfig ParameterDescriptioneventTransformer: - country: "CN" eventInputTopic: "${env}-internal-full-hcp-merge-cn" eventOutputTopic: "${env}-out-full-hcp-merge-cn" enricher: com.COMPANY.mdm.event_transformer.enricher.ChinaRefEntityProcessor hcoConnector: processor: com.COMPANY.mdm.event_transformer.enricher.ChinaHcoConnectorProcessor transformer: com.COMPANY.mdm.event_transformer.transformer.COMPANYToIqviaEventTransformer refEntity: - type: HCO attribute: ContactAffiliations relationLookupAttribute: RelationType.RelationshipDescription relationLookupCode: CON - type: MainHCO attribute: ContactAffiliations relationLookupAttribute: RelationType.RelationshipDescription relationLookupCode: REL.MAIThe main part of china-selective-router configuration, contains list of event transformaton configurationcountry - specify country, value of this parameter have to be in event country section otherwise event will be skippedeventInputTopic - input topiceventOutputTopic - output topicenricher - specify class to enrich event, based on refEntity 
configuration this class is resposible for collecting related hco and mainHco entities.hcoConnector.processor - specify class to connect hco with main hco, in this class is made a call to reltio for all connections by hco uri. Based on received data is created additional attribute 'OtherHcoToHco' contains mainHco entity collected by enricher.hcoConnector.enabled - enable or disable hcoConnectorhcoConnector.hcoAttrName - specify additional attibute name to place connected mainHcohcoConnector.outRelations - specify the list of out relation to filter while calling reltio for hco connectionsrefEntity - contains list of attributes containing information about HCO or MainHCO entity (refEntity uri)refEntity.type - type of entity: HCO or MainHcorefEntity.attribute - base attribute to search for entityrefEntity.relationLookupAttribute - attribute to search for lookupCode to decide what entity we are looking forrefEntity.relationLookupCode - code specify entity type"
},
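The enrich / connect / transform steps of the china-selective-router can be sketched as follows. This is a minimal Python sketch under stated assumptions: the dict shapes, the function names (`enrich`, `connect_hco`, `transform`, `process`), and the flat representation of `ContactAffiliations` are illustrative, not the actual Java/Kafka Streams implementation.

```python
# Sketch of the china-selective-router pipeline (assumed event shapes):
# 1) enrich: collect HCO / MainHCO refEntity URIs via the configured lookup codes,
# 2) connect_hco: attach the MainHCO under the extra 'OtherHcoToHco' attribute,
# 3) transform: skip non-matching countries, emit the target-model event.

REF_ENTITY_CONFIG = [
    {"type": "HCO", "attribute": "ContactAffiliations",
     "lookup_attribute": "RelationType.RelationshipDescription", "lookup_code": "CON"},
    {"type": "MainHCO", "attribute": "ContactAffiliations",
     "lookup_attribute": "RelationType.RelationshipDescription", "lookup_code": "REL.MAI"},
]

def enrich(event):
    """Collect refEntity URIs (HCO / MainHCO) based on the relation lookup codes."""
    refs = {}
    for cfg in REF_ENTITY_CONFIG:
        for affiliation in event.get(cfg["attribute"], []):
            if affiliation.get(cfg["lookup_attribute"]) == cfg["lookup_code"]:
                refs[cfg["type"]] = affiliation["refEntityUri"]
    event["_refs"] = refs
    return event

def connect_hco(event):
    """Place the collected MainHCO under the additional 'OtherHcoToHco' attribute."""
    main_hco = event["_refs"].get("MainHCO")
    if main_hco is not None:
        event["OtherHcoToHco"] = main_hco
    return event

def transform(event, country="CN"):
    """Drop events for other countries; otherwise emit the target-model event."""
    if event.get("country") != country:
        return None  # event is skipped, as the `country` config parameter requires
    return {"model": "target", "hco": event["_refs"].get("HCO"),
            "mainHco": event.get("OtherHcoToHco")}

def process(event):
    return transform(connect_hco(enrich(event)))
```

In the real component the same three stages run inside a Kafka Streams topology between the `eventInputTopic` and `eventOutputTopic`, with the enricher/connector/transformer classes supplied by configuration.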
{
"title": "Component Template",
"pageID": "164469941",
"pageLink": "/display/GMDM/Component+Template",
"content": "Description<short description of the componet>Technology:Code link:Flows<List of realized flow with links to Flow section>Exposed interfacesInterface NameTypeEndpoint patternDescriptionREST API|KAFKADependent componentsComponentInterfaceFlowDescription<component name with link><Interface name><flow name with link>for whatConfigurationConfig ParameterDefault valueDescription"
},
{
"title": "DCR Service",
"pageID": "209949312",
"pageLink": "/display/GMDM/DCR+Service",
"content": ""
},
{
"title": "DCR Service 2",
"pageID": "218444525",
"pageLink": "/display/GMDM/DCR+Service+2",
"content": "DescriptionResponsible for the DCR processing. Client (PforceRx) sends the DCRs through REST API, DCRs are routed to the target system (OneKey/Veeva Opendata/Reltio). Client (Pforcerx) retrieves the status of the DCR using status API. Service also contains Kafka-streams functionality to process the DCR updates asynchronously and update the DCRRegistry cache.Services are accessible with REST API.Applies transformations to the Kafka input stream producing the Kafka output stream.Technology: java 8, spring boot, MongoDB, Kafka-StreamsCode link: dcr-service-2 FlowsPforceRx DCR flowsCreate DCRDCR state changeGet DCR statusOneKey: create DCR method (submitVR) - directOneKey: generate DCR Change Events (traceVR)OneKey: process DCR Change EventsVeeva: create DCR method (storeVR)Veeva: generate DCR Change Events (traceVR)Veeva: process DCR Change EventsReltio: create DCR method - directReltio: process DCR Change EventsExposed interfacesREST APIInterface NameTypeEndpoint patternDescriptionCreate DCRsREST APIPOST /dcrCreate DCRsGET DCRs statusREST APIGET /dcr/statusGET DCRs statusOneKey StreamInterface NameTypeEndpoint patternDescriptioncallback inputKAFKA{env}-internal-onekey-dcr-change-events-inEvents generated by the OneKey component after OneKey DataSteward Action. Flow responsible for events generation is OneKey: generate DCR Change Events (traceVR)output  - callbacksMongomongoDCR Registry updated Veeva OpenData StreamInterface NameTypeEndpoint patternDescriptioncallback inputKAFKA{env}-internal-veeva-dcr-change-events-inEvents generated by the Veeva component after Veeva DataSteward Action. Flow responsible for events generation is Veeva: generate DCR Change Events (traceVR)output  - callbacksMongomongoDCR Registry updated Reltio StreamInterface NameTypeEndpoint patternDescriptioncallback inputKAFKA{env}-internal-reltio-dcr-change-events-inEvents generated by Reltio after DataSteward Action. 
Published by the event-publisher component selector: "(exchange.in.headers.reconciliationTarget==null) && exchange.in.headers.eventType in ['full'] && exchange.in.headers.eventSubtype in ['DCR_CREATED', 'DCR_CHANGED', 'DCR_REMOVED']" output  - callbacksMongomongoDCR Registry updated Dependent componentsComponentInterfaceFlowDescriptionAPI RouterAPI routingCreate DCRroute the requests to the DCR-Service componentManagerMDMIntegrationServiceGetEntitiesByUrisRetrieve multiple entities by providing the list of entity URIsGetEntityByIdget entity by the idGetEntityByCrosswalkget entity by the crosswalkCreateDCRcreate change requests in ReltioOK DCR ServiceOneKeyIntegrationServiceCreateDCRcreate VR in OneKeyVeeva DCR ServiceThirdPartyIntegrationServiceCreateDCRcreate VR in VeevaAt the moment only Veeva implements this interface; in the future OneKey will be exposed via this interface as well  Hub StoreMongo connectionN/AStore cache data in a mongo collectionTransaction LoggerTransactionServiceTransactionsSaves each DCR status change in transactionsConfigurationConfig ParameterDefault valueDescriptionkafka.groupId${env}_dcr2The application ID. Each stream processing application must have a unique ID. The same ID must be given to all instances of the application. It is recommended to use only alphanumeric characters, . (dot), - (hyphen), and _ (underscore). 
Examples: "hello_world", "hello_world-v1.0.0"kafkaOther.num.stream.threads10Number of threads used in the Kafka StreamkafkaOther.default.deserialization.exception.handlercom.COMPANY.mdm.common.streams.StructuredLogAndContinueExceptionHandlerDeserialization exception handlerkafkaOther.ssl.engine.factory.classcom.COMPANY.mdm.common.security.CustomTrustStoreSslEngineFactorySSL configkafkaOther.partitioner.classcom.COMPANY.mdm.common.ping.PingPartitionerPing partitioner required in Kafka Streams application with PING servicekafkaOther.max.poll.interval.ms3600000Number of milliseconds to wait max time before next poll of eventskafkaOther.max.poll.records10Number of records downloaded in one poll from kafkakafkaOther.max.request.size2097152Events message sizedataStewardResponseConfig: reltioResponseStreamConfig: enable: true eventInputTopic: - ${env}-internal-reltio-dcr-change-events-in    sendTo3PartyDecisionTable:      - target: Veeva        decisionProperties:          sourceName: "VEEVA_CROSSWALK"      - target: Veeva        decisionProperties:          countries: ["ID","PK","MY","TH"]      - target: OneKey    sendTo3PartyTopics:      Veeva:        - ${env}-internal-sendtothirdparty-ds-requests-in      OneKey:        - ${env}-internal-onekeyvr-ds-requests-in VeevaResponseStreamConfig: enable: true eventInputTopic: - ${env}-internal-veeva-dcr-change-events-in  onekeyResponseStreamConfig: enable: true eventInputTopic: - ${env}-internal-onekey-dcr-change-events-in maxRetryCounter: 20 deduplication: duration: 2m gracePeriod: 0s byteLimit: 2147483648 suppressName: dcr2-onekey-response-stream-suppress name: dcr2-onekey-response-stream-with-delay storeName: dcr2-onekey-response-window-deduplication-store pingInterval: 1m- ${env}-internal-reltio-dcr-change-events-in- ${env}-internal-onekey-dcr-change-events-in- ${env}-internal-veeva-dcr-change-events-in- ${env}-internal-sendtothirdparty-ds-requests-in- ${env}-internal-onekeyvr-ds-requests-inConfiguration related to the event 
processing from Reltio, OneKey or VeevaDeduplication is related to OneKey and allows configuring the aggregation window for events (processing daily) - 24hMaxRetryCounter should be set to a high number - 1000000targetDecisionTable: - target: Reltio decisionProperties: userName: "mdm_dcr2_test_reltio_user" - target: OneKey decisionProperties: userName: "mdm_dcr2_test_onekey_user" - target: Veeva    decisionProperties:      sourceName: "VEEVA_CROSSWALK" - target: Veeva    decisionProperties:      countries: ["ID","PK","MY","TH"] - target: Reltio decisionProperties: country: GBA list of the following combinations of attributesEach attribute in the configuration is optional. The decision table performs validation based on the input request and the main object - the main object is the HCP; if the HCP is empty, the decision table checks the HCO. The result of the decision table is the TargetType: the routing to the Reltio MDM system, the OneKey service or the Veeva service. userName the user name that executes the requestsourceNamethe source name of the Main objectcountrythe country defined in the requestoperationTypethe operation type for the Main object{ insert, update, delete }affectedAttributesthe list of attributes that the user is changingaffectedObjects{ HCP, HCO, HCP_HCO}RESULT →  TargetType {Reltio, OneKey, Veeva}PreCloseConfig: acceptCountries: - "IN" - "SA"   rejectCountries: - "PL" - "GB"DCRs whose countries belong to the acceptCountries attribute are automatically accepted (PRE_APPROVED), or rejected (PRE_REJECTED) when they belong to rejectCountries. 
acceptCountriesList of values, example: [ IN, GB, PL , ...]rejectCountriesList of values, example: [ IN, GB, PL ]transactionLogger: simpleDCRLog: enable: true kafkaEfk: enable: trueTransaction ServiceThe configuration that enables/disables the transaction loggeroneKeyClient: url: http://devmdmsrv_onekey-dcr-service_1:8092 userName: dcr_service_2_userOneKey Integration ServiceThe configuration that allows connecting to the OneKey DCR serviceVeevaClient: url: http://localhost:8093 username: user apiKey: ""Veeva Integration Service The configuration that allows connecting to the Veeva DCR servicemanager: url: https://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/${env}/gw userName: dcr_service_2_user logMessages: true timeoutMs: 120000MDM Integration ServiceThe configuration that allows connecting to the Reltio serviceIndexesDCR Service 2 Indexes"
},
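The targetDecisionTable behaviour described above can be sketched in Python. This is an illustrative sketch under stated assumptions: the row/request dict shapes are invented for the example, every decision property is treated as optional, and the first fully matching row is assumed to win (the source does not state the tie-breaking rule).

```python
# Sketch of targetDecisionTable routing (assumed semantics: each
# decisionProperty in a row is optional; a row matches when all of its
# properties match the request; the first matching row yields the TargetType).

DECISION_TABLE = [
    {"target": "Reltio", "userName": "mdm_dcr2_test_reltio_user"},
    {"target": "OneKey", "userName": "mdm_dcr2_test_onekey_user"},
    {"target": "Veeva",  "sourceName": "VEEVA_CROSSWALK"},
    {"target": "Veeva",  "countries": ["ID", "PK", "MY", "TH"]},
    {"target": "Reltio", "country": "GB"},
]

def route(request):
    """Return the TargetType {Reltio, OneKey, Veeva} for a DCR request dict."""
    for row in DECISION_TABLE:
        matches = True
        for key, expected in row.items():
            if key == "target":
                continue
            if key == "countries":            # list-valued property: membership test
                matches = request.get("country") in expected
            else:                             # scalar property: equality test
                matches = request.get(key) == expected
            if not matches:
                break
        if matches:
            return row["target"]
    return None  # no row matched; the request cannot be routed
```

With this table, a request from `mdm_dcr2_test_onekey_user` routes to OneKey, a request whose main object carries the VEEVA_CROSSWALK source routes to Veeva, and a GB-country request falls through to Reltio.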
{
"title": "DCR service connect guide",
"pageID": "415221200",
"pageLink": "/display/GMDM/DCR+service+connect+guide",
"content": "IntroductionThis guide provides comprehensive instructions on integrating new client applications with the DCR (Data Change Request) service in the MDM HUB system. It is intended for technical engineers, client architects, solution designers, and MDM/Mulesoft teams.Table of ContentsOverviewThe DCR service processes Data Change Requests (DCRs) sent by clients through a REST API. These DCRs are routed to target systems such as OneKey, Veeva Opendata, or Reltio. The service also includes Kafka-streams functionality to process DCR updates asynchronously and update the DCRRegistry cache.Access to the DCR API should be confirmed in advance with the P.O. MDM HUB → A.J. VarganinGetting StartedPrerequisitesAPI credentials (username and password)Network configurations (DNS, VPN, updated whitelists to allow you access API endpoints)Setup InstructionsCreate MDM HUB User: Follow the SOP to add a direct API user to the HUB.  Complete the steps outlined in → Add Direct API User to HUBObtain Access Token: Use PingFederate to acquire an access tokenAPI OverviewEndpointsCreate DCR: POST /dcrGet DCR Status: GET /dcr/statusGet Multiple DCR Statuses: GET /dcr/_statusGet Entity Details: GET /{objectUri}MethodsGET: Retrieve informationPOST: Create new DCRsAuthentication and AuthorizationFirst step is to acquire access token. 
If you are connecting to the MDM HUB API for the first time, you should create an MDM HUB user. Once you have the PingFederate username and password, you can acquire the access token.Obtaining Access TokenRequest Token:\ncurl --location --request POST 'https://devfederate.COMPANY.com/as/token.oauth2?grant_type=client_credentials' \\ // Use devfederate for DEV & UAT, stgfederate for STAGE, prodfederate for PROD\n--header 'Content-Type: application/x-www-form-urlencoded' \\\n--header 'Authorization: Basic Base64-encoded(username:password)'\n\nResponse:\n{\n "access_token": "12341SPRtjWQzaq6kgK7hXkMVcTzX", \n "token_type": "Bearer",\n "expires_in": 1799 // The token expires after the time given in the "expires_in" field. Once the token expires, it must be refreshed.\n}\nBelow you can see how Postman should be configured to obtain the access_tokenUsing Access TokenInclude the access token in the Authorization header for all API requests.Network ConfigurationRequired SettingsDNS: Ensure DNS resolution for MDM HUB endpointsVPN: Configure VPN access if requiredWhitelists: Add the necessary IP addresses to the whitelistCreating DCRsThis method is used to create new DCR objects in the MDM HUB system. 
Below is an example request to create a new HCP object in the MDM system.More examples and the entire data model can be found at:DCR service swaggerExample RequestCreate new HCP\ncurl --location '{api_url}/dcr' \\ // e.g., https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-amer-dev\n--header 'Content-Type: application/json' \\\n--header 'Authorization: Bearer ${access_token_value}' \\ // e.g., 0001WvxKA16VWwlufC2dslSILdbE\n--data-raw '[\n {\n "country": "${dcr_country}", // e.g., CA\n        "createdBy": "${created_by}", // e.g., Test user\n        "extDCRComment": "${external_system_comment}", // e.g., This is test DCR to create new HCP\n        "extDCRRequestId": "${external_system_request_id}", // e.g., CA-VR-00255752\n        "dcrType": "${dcr_type}", // e.g., PforceRxDCR\n        "entities": [\n {\n "@type": "hcp",\n "action": "insert",\n "updateCrosswalk": {\n "type": "${source_system_name}", // e.g., PFORCERX \n                    "value": "${source_system_value}" // e.g., HCP-CA-VR-00255752 \n                },\n "values": {\n "birthDate": "07-08-2017",\n "birthYear": "2017",\n "firstName": "Maurice",\n "lastName": "Brekke",\n "title": "HCPTIT.1118",\n "middleName": "Karen",\n "subTypeCode": "HCPST.A",\n "addresses": [\n {\n "action": "insert",\n "values": {\n "sourceAddressId": {\n "source": "${source_system_name}", // e.g., PFORCERX\n                                    "id": "${address_source_system_value}"   // e.g., ADR-CA-VR-00255752 \n                                },\n "addressLine1": "08316 McCullough Terrace",\n "addressLine2": "Waynetown",\n "addressLine3": "Designer Books gold parsing",\n "addressType": "AT.OFF",\n "buildingName": "Handmade Cotton Shirt",\n "city": "Singapore",\n "country": "SG",\n "zip": "ZIP 5"\n }\n }\n ] \n }\n }\n ]\n }\n]'\nRequest placeholders:parameter namedescriptionexampleapi_urlAPI router URLhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-amer-devaccess_token_valueAccess token 
value0001WvxKA16VWwlufC2dslSILdbEdcr_countryMain entity countryCAcreated_byCreated by userTest userexternal_system_commentComment that will be populated to the next processing stepsThis is test DCRexternal_system_request_idID for tracking DCR processingCA-VR-00255752dcr_typeProvided by the MDM HUB team when a user with DCR permission is createdPforceRxDCRsource_system_nameSource system name. The user used to invoke the request has to have access to this sourcePFORCERXsource_system_valueID of this object in source systemHCO-CA-VR-00255752address_source_system_valueID of address in source systemADR-CA-VR-00255752Handling ResponsesSuccess ResponseCreate DCR success response\n[\n {\n "requestStatus": "${request_status}", // e.g., REQUEST_ACCEPTED\n        "extDCRRequestId": "${external_system_request_id}",   // e.g., CA-VR-00255752\n        "dcrRequestId": "${mdm_hub_dcr_request_id}",   // e.g., 4a480255a4e942e18c6816fa0c89a0d2\n        "targetSystem": "${target_system_name}",   // e.g., Reltio\n        "country": "${dcr_request_country}",   // e.g., CA\n        "dcrStatus": {\n "status": "CREATED",\n "updateDate": "2024-05-07T11:22:10.806Z",\n "dcrid": "${reltio_dcr_status_entity_uri}"   // e.g., entities/0HjtwJO\n        }\n }\n]\nResponse placeholders:parameterdescriptionexampleexternal_system_request_idDCR request id in source systemCA-VR-00255752mdm_hub_dcr_request_idDCR request id in MDM HUB system4a480255a4e942e18c6816fa0c89a0d2target_system_nameDCR target system name, one of values: OneKey, Reltio, VeevaReltiodcr_request_countryDCR request countryCArequest_statusDCR request status, one of values: REQUEST_ACCEPTED, REQUEST_FAILED, REQUEST_REJECTEDREQUEST_ACCEPTEDreltio_dcr_status_entity_uriURI of DCR status entity in Reltio systementities/0HjtwJORejected Response\n[\n {\n "requestStatus": "REQUEST_REJECTED",\n "errorMessage": "DuplicateRequestException -> Request [97aa3b3f-35dc-404c-9d4a-edfaf9e7121211c] has already been processed",\n "errorCode": "DUPLICATE_REQUEST",\n 
"extDCRRequestId": "97aa3b3f-35dc-404c-9d4a-edfaf9e7121211c"\n }\n]\nFailed Response\n[\n {\n "requestStatus": "REQUEST_FAILED",\n "errorMessage": "Target lookup code not found for attribute: HCPTitle, country: SG, source value: HCPTIT.111218.",\n "errorCode": "VALIDATION_ERROR",\n "extDCRRequestId": "97aa3b3f-35dc-404c-9d4a-edfaf9e712121121c"\n }\n]\nIn case of incorrect user configuration in the system, the API will return errors as follows. In these cases, please contact the MDM HUB team.Getting DCR statusProcessing of DCR will take some time. DCR status can be track via get DCR status API calls. DCR processing ends when it reaches the final status: ACCEPTED or REJECTED. When the DCR gets the ACCEPTED status, the following fields will appear in its status: "objectUri" and "COMPANYCustomerId". These can be used to find created/modified entities in the MDM system. Full documentation can be found at → Get DCR status.Example RequestBelow is an example query for the selected external_system_request_id\ncurl --location '{api_url}/dcr/_status/${external_system_request_id}' \\ // e.g., CA-VR-00255752 \n--header 'Authorization: Bearer ${access_token_value}' // e.g., 0001WvxKA16VWwlufC2dslSILdbE \nHandling ResponsesSuccess Response\n{\n "requestStatus": "REQUEST_ACCEPTED",\n "extDCRRequestId": "8600ca9a-c317-45d0-97f6-152f01d70158",\n "dcrRequestId": "a2848f2a573344248f78bff8dc54871a",\n "targetSystem": "Reltio",\n "country": "AU",\n "dcrStatus": {\n "status": "ACCEPTED",\n "objectUri": "entities/0Hhskyx", // \n "COMPANYCustomerId": "03-102837896", // usually HCP. HCO only when creating or updating HCO without references to HCP in DCR request\n        "updateDate": "2024-05-07T11:47:08.958Z",\n "changeRequestUri": "changeRequests/0N38Jq0",\n "dcrid": "entities/0EUulla"\n }\n}\nRejected Response\n{\n "requestStatus": "REQUEST_REJECTED",\n "errorMessage": "Received DCR_CHANGED event, updatedBy: svc-pfe-mdmhub, on 1714378259964. 
Updating DCR status to: REJECTED",\n "extDCRRequestId": "b9239835-937e-434d-948c-6a282a736c4f",\n "dcrRequestId": "0b4125648b6c4d9cb785856841f7d65d",\n "targetSystem": "Veeva",\n "country": "HK",\n "dcrStatus": {\n "status": "REJECTED",\n "updateDate": "2024-04-29T08:11:06.555Z",\n "comment": "This DCR was REJECTED by the VEEVA Data Steward with the following comment: [A-20022] Veeva Data Steward: Your request has been rejected..",\n "changeRequestUri": "changeRequests/0IojkYP",\n "dcrid": "entities/0qmBUXU"\n }\n}\nGetting multiple DCR statusesMultiple statuses can be selected at once using the DCR status filtering APIExample RequestFilter DCR status\ncurl --location '{api_url}/dcr/_status?updateFrom=2021-10-17T20%3A31%3A31.424Z&updateTo=2023-10-17T20%3A31%3A31.424Z&limit=5&offset=3' \\\n--header 'Authorization: Bearer ${access_token_value}' // e.g., 0001WvxKA16VWwlufC2dslSILdbE \nExample ResponseSuccess Response\n[\n {\n "requestStatus": "REQUEST_ACCEPTED",\n "extDCRRequestId": "8d3eb4f7-7a08-4813-9a90-73caa7537eba",\n "dcrRequestId": "360d152d58d7457ab6a0610b718b6b8b",\n "targetSystem": "OneKey",\n "country": "AU",\n "dcrStatus": {\n "status": "ACCEPTED",\n "objectUri": "entities/05jHpR1",\n "COMPANYCustomerId": "03-102429068",\n "updateDate": "2023-10-13T05:43:02.007Z",\n "comment": "ONEKEY response comment: ONEKEY accepted response - HCP EID assigned\\nONEKEY HCP ID: WUSM03999911",\n "changeRequestUri": "8b32b8544ede4c72b7adfa861b1dc53f",\n "dcrid": "entities/04TxaQB"\n }\n },\n {\n "requestStatus": "REQUEST_ACCEPTED",\n "extDCRRequestId": "b66be6bd-655a-47f8-b78b-684e80166096",\n "dcrRequestId": "becafcb2cd004c1d89ecfc670de1de70",\n "targetSystem": "Reltio",\n "country": "AU",\n "dcrStatus": {\n "status": "ACCEPTED",\n "objectUri": "entities/06SVUCq",\n "COMPANYCustomerId": "03-102429064",\n "updateDate": "2023-10-13T05:35:08.597Z",\n "comment": "26498057 [svc-pfe-mdmhub][1697175298895] -",\n "changeRequestUri": "changeRequests/06sXnXH",\n "dcrid": 
"entities/08LAHeQ"\n }\n }\n]\nGet entityThis method is used to prepare a DCR request for modifying entities and to validate the created/modified entities in the DCR process. Use the "objectUri" field available after accepting the DCR to query MDM system.Example RequestGet entity request\ncurl --location '{api_url}/${objectUri}' \\ // e.g., https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-amer-dev, entities/05jHpR1\n --header 'Authorization: Bearer ${access_token_value}' // e.g., 0001WvxKA16VWwlufC2dslSILdbE \nExample ResponseSuccess ResponseGet entity response\n{\n "type": "configuration/entityTypes/HCP",\n "uri": "entities/06SVUCq",\n "createdBy": "svc-pfe-mdmhub",\n "createdTime": 1697175293866,\n "updatedBy": "Re-cleansing of null in tenant 2NBAwv1z2AvlkgS background task. (started by test.test@COMPANY.com)",\n "updatedTime": 1713375695895,\n "attributes": {\n "COMPANYGlobalCustomerID": [\n {\n "uri": "entities/06SVUCq/attributes/COMPANYGlobalCustomerID/LoT0xC2",\n "type": "configuration/entityTypes/HCP/attributes/COMPANYGlobalCustomerID",\n "value": "03-102429064",\n "ov": true\n }\n ],\n "TypeCode": [\n {\n "uri": "entities/06SVUCq/attributes/TypeCode/LoT0XcU",\n "type": "configuration/entityTypes/HCP/attributes/TypeCode",\n "value": "HCPT.NPRS",\n "ov": true\n }\n ],\n "Addresses": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv",\n "value": {\n "AddressType": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressType/dZqkWUB",\n "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressType",\n "value": "TYS.P",\n "ov": true\n }\n ],\n "COMPANYAddressID": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/COMPANYAddressID/dZqkakR",\n "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/COMPANYAddressID",\n "value": "7001330683",\n "ov": true\n }\n ],\n "AddressLine1": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressLine1/dZqkf0h",\n "type": 
"configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressLine1",\n "value": "addressLine1",\n "ov": true\n }\n ],\n "AddressLine2": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressLine2/dZqkjGx",\n "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressLine2",\n "value": "addressLine2",\n "ov": true\n }\n ],\n "AddressLine3": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressLine3/dZqknXD",\n "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressLine3",\n "value": "addressLine3",\n "ov": true\n }\n ],\n "City": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/City/dZqkrnT",\n "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/City",\n "value": "city",\n "ov": true\n }\n ],\n "Country": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/Country/dZqkw3j",\n "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/Country",\n "value": "GB",\n "ov": true\n }\n ],\n "Zip5": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/Zip5/dZql0Jz",\n "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/Zip5",\n "value": "zip5",\n "ov": true\n }\n ],\n "Source": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/Source/dZql4aF",\n "value": {\n "SourceName": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/Source/dZql4aF/SourceName/dZql8qV",\n "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/Source/attributes/SourceName",\n "value": "PforceRx",\n "ov": true\n }\n ],\n "SourceAddressID": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/Source/dZql4aF/SourceAddressID/dZqlD6l",\n "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/Source/attributes/SourceAddressID",\n "value": "string",\n "ov": true\n }\n ]\n },\n "ov": true,\n "label": "PforceRx"\n }\n ],\n "VerificationStatus": [\n {\n "uri": 
"entities/06SVUCq/attributes/Addresses/dZqkSDv/VerificationStatus/dZrp4Jz",\n "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/VerificationStatus",\n "value": "Unverified",\n "ov": true\n }\n ],\n "VerificationStatusDetails": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/VerificationStatusDetails/hLXLd9W",\n "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/VerificationStatusDetails",\n "value": "Address Verification Status is unverified - unable to verify. the output fields will contain the input data.\\nPost-Processed Verification Match Level is 0 - none.\\nPre-Processed Verification Match Level is 0 - none.\\nParsing Status isidentified and parsed - All input data has been able to be identified and placed into components.\\nLexicon Identification Match Level is 0 - none.\\nContext Identification Match Level is 5 - delivery point (postbox or subbuilding).\\nPostcode Status is PostalCodePrimary identified by context - postalcodeprimary identified by context.\\nThe accuracy matchscore, which gives the similarity between the input data and closest reference data match is 100%.",\n "ov": true\n }\n ],\n "AVC": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/AVC/hLXLhPm",\n "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/AVC",\n "value": "U00-I05-P1-100",\n "ov": true\n }\n ],\n "AddressRank": [\n {\n "uri": "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressRank/gjq5qMj",\n "type": "configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressRank",\n "value": "1",\n "ov": true\n }\n ]\n },\n "ov": true,\n "label": "TYS.P - addressLine1, addressLine2, city, zip5, GB"\n }\n ]\n },\n "crosswalks": [\n {\n "type": "configuration/sources/ReltioCleanser",\n "value": "06SVUCq",\n "uri": "entities/06SVUCq/crosswalks/dZrp03j",\n "reltioLoadDate": 1697175300805,\n "createDate": 1697175303886,\n "updateDate": 1697175303886,\n "attributes": [\n 
"entities/06SVUCq/attributes/Addresses/dZqkSDv",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/AVC/hLXLhPm",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/VerificationStatus/dZrp4Jz",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/VerificationStatusDetails/hLXLd9W"\n ]\n },\n {\n "type": "configuration/sources/Reltio",\n "value": "06SVUCq",\n "uri": "entities/06SVUCq/crosswalks/dZqkNxf",\n "reltioLoadDate": 1697175300805,\n "createDate": 1697175300805,\n "updateDate": 1697175300805,\n "attributes": [\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/Country/dZqkw3j",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/Source/dZql4aF/SourceName/dZql8qV",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/Zip5/dZql0Jz",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressLine1/dZqkf0h",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/Source/dZql4aF/SourceAddressID/dZqlD6l",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/COMPANYAddressID/dZqkakR",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/Source/dZql4aF",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressLine2/dZqkjGx",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/City/dZqkrnT",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressLine3/dZqknXD",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressType/dZqkWUB"\n ],\n "singleAttributeUpdateDates": {\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/Country/dZqkw3j": "2023-10-13T05:35:00.805Z",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/Source/dZql4aF/SourceName/dZql8qV": "2023-10-13T05:35:00.805Z",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/Zip5/dZql0Jz": "2023-10-13T05:35:00.805Z",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressLine1/dZqkf0h": "2023-10-13T05:35:00.805Z",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/Source/dZql4aF/SourceAddressID/dZqlD6l": "2023-10-13T05:35:00.805Z",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/COMPANYAddressID/dZqkakR": 
"2023-10-13T05:35:00.805Z",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/Source/dZql4aF": "2023-10-13T05:35:00.805Z",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressLine2/dZqkjGx": "2023-10-13T05:35:00.805Z",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/City/dZqkrnT": "2023-10-13T05:35:00.805Z",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressLine3/dZqknXD": "2023-10-13T05:35:00.805Z",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv": "2023-10-13T05:35:00.805Z",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressType/dZqkWUB": "2023-10-13T05:35:00.805Z"\n }\n },\n {\n "type": "configuration/sources/HUB_CALLBACK",\n "value": "06SVUCq",\n "uri": "entities/06SVUCq/crosswalks/LoT0kPG",\n "reltioLoadDate": 1697175429294,\n "createDate": 1697175296673,\n "updateDate": 1697175296673,\n "attributes": [\n "entities/06SVUCq/attributes/TypeCode/LoT0XcU",\n "entities/06SVUCq/attributes/COMPANYGlobalCustomerID/LoT0xC2",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressRank/gjq5qMj",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv"\n ],\n "singleAttributeUpdateDates": {\n "entities/06SVUCq/attributes/TypeCode/LoT0XcU": "2023-10-13T05:34:56.673Z",\n "entities/06SVUCq/attributes/COMPANYGlobalCustomerID/LoT0xC2": "2023-10-13T05:37:09.294Z",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv/AddressRank/gjq5qMj": "2023-10-13T05:35:08.420Z",\n "entities/06SVUCq/attributes/Addresses/dZqkSDv": "2023-10-13T05:35:08.420Z"\n }\n }\n ]\n}\nRejected ResponseEntity not found response\n{\n "code": "404",\n "message": "Entity not found"\n}\nTroubleshooting GuideAll documentation with a detailed description of flows can be found at → PforceRx DCR flowsCommon Issues and SolutionsDuplicate Request:Error Message: "DuplicateRequestException -> Request [ID] has already been processed."Solution: Ensure that the extDCRRequestId is unique for each request.  This ID is used to track DCR processing and prevent duplicate submissions. 
Generate a new unique ID for every new DCR request.Validation Error:Error Message: "Target lookup code not found for attribute: [Attribute], country: [Country], source value: [Value]."Solution: This error indicates that the provided attribute values or lookup codes are incorrect or not recognized by the system.Verify Attribute Values: Double-check the attribute values in your request against the expected values and formats documented in the API specification (Swagger documentation).Correct Lookup Codes: Ensure that you are using the correct lookup codes for attributes that require them (e.g., country codes, title codes). Example: If you receive "Target lookup code not found for attribute: HCPTitle, country: SG, source value: HCPTIT.111218.", verify that 'HCPTIT.111218' is a valid HCP Title code for Singapore ('SG').Network Errors:Issue: Unable to connect to the DCR API endpoint. Common errors include "Connection refused," "Timeout," "DNS resolution failure."Solutions:Verify Network Connectivity: Use the ping command (e.g., ping api-amer-nprod-gbl-mdm-hub.COMPANY.com) to check if the API endpoint is reachable. Use traceroute to diagnose network path issues.Check VPN Connection: If VPN access is required, ensure that your VPN connection is active and correctly configured.Firewall Settings: Confirm that your firewall rules are not blocking outbound traffic on the necessary ports (typically 443 for HTTPS) to the API endpoint. Contact your network administrator to verify firewall settings if needed.DNS Resolution: Ensure that your DNS server is correctly resolving the MDM HUB API endpoint hostname to an IP address.Authentication Errors:Issue: API requests are rejected due to authentication failures. 
Common errors include "Invalid credentials," "Token expired," "Unauthorized."Solutions:Verify API Credentials: Double-check that you are using the correct username and password for API access.Access Token Validity: If using Bearer Token authentication, ensure that your access token is valid and not expired. Access tokens typically have a limited lifespan (e.g., 30 minutes).Token Refresh: Implement token refresh logic in your client application to automatically obtain a new access token when the current one expires.Authorization Header: Verify that you are including the access token correctly in the Authorization header of your API requests, using the "Bearer " scheme (e.g., Authorization: Bearer <your_access_token>).Service Unavailable Errors:Issue: Intermittent API connectivity issues or request failures with "503 Service Unavailable" or "500 Internal Server Error" responses.Solutions:Check Service Status: Check if there is a known outage or maintenance activity for the MDM HUB service. A service status page may be available (check with the MDM HUB team).Retry Requests: Implement retry logic in your client application to handle transient service interruptions. 
Use exponential backoff to avoid overwhelming the API service during recovery.Contact Support: If the issue persists, contact the MDM HUB support team to report the service unavailability and get further assistance.Missing Configuration for UserError Message: "RuntimeException -> User [User] dcrServiceConfig is missing."Missing dcr service configuration\n[\n {\n "requestStatus": "REQUEST_FAILED",\n "errorMessage": "RuntimeException -> User test_user dcrServiceConfig is missing",\n "extDCRRequestId": "97aa3b3f-35dc-404c-9d4a-edfaf9e7b11c"\n }\n]\nSolution: Contact the MDM HUB team to ensure the user configuration is correctly set up.Permission Denied to create DCR:Error Message: "User is not permitted to perform: [Action]"Missing role\n{\n "code": "403",\n "message": "User is not permitted to perform: CREATE_DCR"\n}\nSolution: Ensure the user has the necessary permissions to perform the action.Verify User Permissions: Contact the MDM HUB team or your MDM HUB administrator to verify that your user account has the necessary roles and permissions to perform the requested action (e.g., CREATE_DCR, GET_DCR_STATUS) and access the specified DCR type (e.g., PforceRxDCR).DCR Type Access: Ensure that your user configuration includes access to the specific DCR type you are trying to use.Validation Error:Error Message: "ValidationException -> User [User] doesn't have access to PforceRXDCR dcrType."Invalid dcr service configuration\n[\n {\n "requestStatus": "REQUEST_REJECTED",\n "errorMessage": "ValidationException -> User test_user doesn't have access to PforceRXDCR dcrType",\n "errorCode": "VALIDATION_ERROR",\n "extDCRRequestId": "97aa3b3f-35dc-404c-9d4a-edfaf9e71212112121c"\n }\n]\nDescription: This error occurs when the user does not have the necessary permissions to access a specific DCR type (PforceRXDCR) in the MDM HUB system.Possible Causes:The user has not been granted the required permissions for the specified DCR typeThe user configuration is incomplete or 
incorrectSolution:Verify User Permissions: Ensure that the user has been granted the necessary permissions to access the PforceRXDCR DCR type. This can be done by checking the user roles and permissions in the MDM HUB system"
},
{
"title": "Entity Enricher",
"pageID": "164469912",
"pageLink": "/display/GMDM/Entity+Enricher",
"content": "DescriptionAccepts simple events on the input. Performs the following calls to Reltio:getEntitiesByUrisgetRelationgetChangeRequestfindEntityCountryProduces the events enriched with the targetEntity / targetRelation field retrieved from RELTIO.Technology: java 8, spring boot, mongodb, kafka-streamsCode link: entity-enricher Exposed interfacesInterface NameTypeEndpoint patternDescriptionentity enricher inputKAFKA${env}-internal-reltio-eventsevents being sent by the event publisher component. Event types being considered: HCP_*, HCO_*, ENTITY_MATCHES_CHANGEDentity enricher outputKAFKA${env}-internal-reltio-full-eventsDependent componentsComponentInterfaceFlowDescriptionManagerMDMIntegrationServicegetEntitiesByUrisgetRelationgetChangeRequestfindEntityCountryConfigurationConfig ParameterDefault valueDescriptionbundle.enabletrueenable / disable functionbundle.inputTopics${env}-internal-reltio-eventsinput topicbundle.threadPoolSize10number of thread pool sizebundle.pollDuration10spoll intervalbundle.outputTopic${env}-internal-reltio-full-eventsoutput topickafka.groupId${env}-entity-enricherThe application ID. Each stream processing application must have a unique ID. The same ID must be given to all instances of the application. It is recommended to use only alphanumeric characters, . (dot), - (hyphen), and _ (underscore). Examples: "hello_world", "hello_world-v1.0.0"bundle.kafkaOther.session.timeout.ms30000bundle.kafkaOther.max.poll.records10bundle.kafkaOther.max.poll.interval.ms300000bundle.kafkaOther.auto.offset.resetearliestbundle.kafkaOther.enable.auto.commitfalsebundle.kafkaOther.max.request.size2097152bundle.gateway.apiKey${gateway.apiKey}bundle.gateway.logMessagesfalsebundle.gateway.url${gateway.url}bundle.gateway.userName${gateway.userName}"
},
{
"title": "HUB APP",
"pageID": "302700538",
"pageLink": "/display/GMDM/HUB+APP",
"content": "DescriptionHUB UI is a front-end application that presents basic information about the MDM HUB cluster. This component allows you to manage Kafka and Airflow Dags or view quality service configuration.The app allows users to log in with their COMPANY accounts.Technology: AngularCode link: mdm-hub-appFlowsUser flowsAdmin flowsAccess:Add new role and add users to the UIDependent componentsComponentInterfaceDescriptionMDM ManagerREST APIUsed to fetch quality service configuration and for testing entitiesMDM AdminREST APIUsed to manage kafka, airflow dags and reconciliation serviceConfigurationComponent is configured via environment variablesEnvironment variableDefault valueDescriptionBACKEND_URIN/AMDM Manager URIADMIN_URIN/AMDM Admin URIINGRESS_PREFIXN/AApplication context path"
},
{
"title": "Hub Store",
"pageID": "164469908",
"pageLink": "/display/GMDM/Hub+Store",
"content": "Hub store is a mongo cache where are stored: EntityHistory, EntityMatchesHistory, EntityRelation.ConfigurationConfig ParameterDefault valueDescriptionmongo:host: ***:27017,***:27017,***:27017dbName: reltio_${env}user: ***url: mongodb://${mongo.user}:${mongo.password}@${mongo.host}/${mongo.dbName}Mong DB connection configuration"
},
{
"title": "Inc batch channel",
"pageID": "302686382",
"pageLink": "/display/GMDM/Inc+batch+channel",
"content": "DescriptionResponsible for ETL data loads of data to Reltio. It takes plain data files(eg. txt, csv) and, based on defined mappings, converts it into json objects, which are then sent to Reltio.Code link: inc-batch-channelFlowsIncremantal batch Dependent componentsComponentInterface nameDescriptionManagerKafkaEvents constructed by inc-batch-channel are transferred to the kafka topic, from where they are read by mdm-manager and sent to Reltio. When the event is processed by the Reltio manager send ACK message on the appropriate topic:Example input topic: gbl-prod-internal-async-all-sapExample ACK topic: gbl-prod-internal-async-all-sap-ackBatch ServiceBatch ControllerUsed to store ETL loads state and statistics. All information are placed in mongodbMongoDb collectionsGenBatchDags - stores dag stages stateGenBatchAttributeHisotry - stores state of objects loaded by inc-batch-channelgenBatchLastBatchIds - last batch id for every batchgenBatchProcessorStartTime - start time of all batch stagesgenBatchTagMappings -ConfigurationConnectionsmongoConnectionProps.dbUrlFull Mongo DB URLmongoConnectionProps.mongo.dbNameMongo database namekafka.serversKafka Hostname kafka.groupIdBatch Service component group namekafka.saslMechanismSASL configrrationkafka.securityProtocolSecurity Protocolkafka.sslTruststoreLocationSSL trustore file locationkafka.sslTruststorePasswordSSL trustore file passowrdkafka.usernameKafka usernamekafka.passwordKafka dedicated user passwordkafka.sslEndpointAlgorithm:SSL algorightBatches configuration:batches.${batch_name}Batch configurationbatches.${batch_name}.inputFolderDirectory with input filesbatches.${batch_name}.outputFolderDirectory with output filesbatches.${batch_name}.columnsDefinitionFileFile defining mappingbatches.${batch_name}.requestTopicManager topic with events that are going to be sent to Reltiobatches.${batch_name}.ackTopicAck topicbatches.${batch_name}.parserTypeParser type. 
Defines separator and encoding formatbatches.${batch_name}.preProcessingDefines preprocessing of input filesbatches.${batch_name}.stages.${stage_name}.stageOrderStage prioritybatches.${batch_name}.stages.${stage_name}.processorTypeProcessor type:SIMPLE - change is applied only in mongoENTITY_SENDER - change is sent to Reltiobatches.${batch_name}.stages.${stage_name}.outputFileNameOutput file namebatches.${batch_name}.stages.${stage_name}.disabledIf stage is disabledbatches.${batch_name}.stages.${stage_name}.definitionsDefines which definition is used to map input filebatches.${batch_name}.stages.${stage_name}.deltaDetectionEnabledIf previous and current state of objects are comparedbatches.${batch_name}.stages.${stage_name}.initDeletedLoadEnabledbatches.${batch_name}.stages.${stage_name}.fullAttributesMergebatches.${batch_name}.stages.${stage_name}.postDeleteProcessorEnabledbatches.${batch_name}.stages.${stage_name}.senderHeadersDefines http headers"
},
{
"title": "Kafka Connect",
"pageID": "164469804",
"pageLink": "/display/GMDM/Kafka+Connect",
"content": "DescriptionKafka Connect is a tool for scalably and reliably streaming data between Apache Kafka® and other data systems.  It makes it simple to quickly define connectors that move large data sets in and out of Kafka. Kafka Connect can ingest entire databases or collect metrics from all your application servers into Kafka topics, making the data available for stream processing with low latency.FlowsSnowflake: Base tables refreshSnowflake: Events publish flowSnowflake: History InactiveSnowflake: LOV data publish flowSnowflake: MT data publish flowConfigurationKafka Connect - properties descriptionparamvaluegroup.id<env>-kafka-connect-snowflaketopic.creation.enablefalseoffset.storage.topic<env>-internal-kafka-connect-snowflake-offset config.storage.topic<env>-internal-kafka-connect-snowflake-config status.storage.topic<env>-internal-kafka-connect-snowflake-statuskey.converterorg.apache.kafka.connect.storage.StringConvertervalue.converterorg.apache.kafka.connect.storage.StringConverterkey.converter.schemas.enabletruevalue.converter.schemas.enabletrueconfig.storage.replication.factor3offset.storage.replication.factor3status.storage.replication.factor3 rest.advertised.host.namelocalhostrest.port8083security.protocolSASL_PLAINTEXT sasl.mechanismSCRAM-SHA-512consumer.group.id<env>-kafka-connect-snowflake-consumerconsumer.security.protocolSASL_PLAINTEXTconsumer.sasl.mechanismSCRAM-SHA-512connectors - SnowflakeSinkConnector - properties descriptionparamvaluesnowflake.topic2table.map<env>-out-full-snowflake-all:HUB_KAFKA_DATAtopics<env>-out-full-snowflake-allbuffer.flush.time300snowflake.url.name<sf_instance_name>snowflake.database.name<db_name>snowflake.schema.nameLANDINGbuffer.count.records1000snowflake.user.name<user_name>value.convertercom.snowflake.kafka.connector.records.SnowflakeJsonConverterkey.converterorg.apache.kafka.connect.storage.StringConverterbuffer.size.bytes60000000snowflake.private.key.passphrase<secret>snowflake.private.key<secret>There is an 
exception connected with the FLEX environment. The S3SinkConnector is used here - properties descriptionparamvalues3.region<region>s3.part.retries10 s3.bucket.name<s3_bucket>s3.compression.typenone topics.dir<s3_topic_dir>topics<env>-out-full-gblus-flex-allflush.size1000000timezoneUTClocale<locale> format.classio.confluent.connect.s3.format.json.JsonFormatschema.generator.classio.confluent.connect.storage.hive.schema.DefaultSchemaGeneratorschema.compatibilityNONE aws.access.key.id<secret>aws.secret.access.key<secret>value.converterorg.apache.kafka.connect.json.JsonConvertervalue.converter.schemas.enablefalsekey.converterorg.apache.kafka.connect.storage.StringConverterkey.converter.schemas.enablefalsepartition.duration.ms86400000partitioner.classio.confluent.connect.storage.partitioner.TimeBasedPartitioner storage.classio.confluent.connect.s3.storage.S3Storagerotate.schedule.interval.ms86400000rotate.interval.ms-1path.formatYYYY-MM-ddtimestamp.extractorWallclock"
},
{
"title": "Manager",
"pageID": "164469894",
"pageLink": "/display/GMDM/Manager",
"content": "DescriptionManager is the main component taking part in client interactions with MDM systems.It orchestrates API calls with  the following services:Reltio & Nucleus adapters translating client input into MDM API callsProcess logic  - mapping  simple calls into multiple MDM callsQuality engine - validating data flowing into MDMsTransaction engine - logging requests for tracing purposesAutorisation engine - controlling user privileges  Cache engine - reduce API calls by reading data directly from Hub storeManager services are accessible with REST API.  Some services are exposed as asynchronous operations through Kafka for performance reasons.Technology: Java, Spring, Apache CamelCode link: mdm-managerFlowsGet entitySearch entitiesValidate HCPCreate/Update HCP/HCO/MCOLOV readCreate relationsMerge & UnmergeMerge & Unmerge ComplexExposed interfacesInterface NameTypeEndpoint patternDescriptionGet entityREST APIGET /entities/{entityId}Get detailed entity informationGet multiple entitiesREST APIGET /entities/_byUrisReturn multiple entities with provided urisGet entity countryREST APIGET /entities/{entityId}/_countryReturn country for an entity with the provided uriMerge & UnmegeREST APIPOST/entities/{entitiyId/_mergePOST/entities/{entitiyId/_unmerge_byUrisMerge entity A with entity B using Reltio uris as IDs.Unmerge entity B from entity A using Reltio uris as IDs.Merge & Unmege ComplexREST APIPOST/entities/_mergePOST/entities/_unmergeMerge entity A with entity B using request body (JSON) with ids.Unmerge entity B from entity A using request body (JSON) with ids.Create/Update entityREST API & KAFKAPOST /hcpPATCH /hcpPOST /hcoPATCH /hcoCreate/partially update entityCreate/Update multiple entitiesREST APIPOST /batch/hcpPATCH /batch/hcpPOST /batch/hcoPATCH /batch/hcoBatch create HCO/HCP entitiesGet entity by crosswalkREST APIGET /entities/crosswalkGet entity by crosswalkDelete entity by crosswalkREST APIDELETE /entities/crosswalkDelete entityt by 
crosswalkCreate/Update relationREST APIPOST /relations/_dbscanPATCH /relations/Create/update relationGet relationREST APIGET /relations/{relationId}Get relation by Reltio URIGet relation by crosswalkREST APIGET /relations/crosswalkGet relation by crosswalkDelete relation by crosswalkREST APIDELETE /relations/crosswalkDelete relation by crosswalkBatch create relationREST APIPOST /batch/relationBatch create relationCreate/replace/update mco profileREST APIPOST /mcoPATCH /mcoCreate, replace or partially update mco profileCreate/replace/update batch mco profileREST APIPOST /batch/mcoPATCH /batch/mcoCreate, replace or partially update mco profilesUpdate Usage FlagsREST APIPOST /updateUsageFlagsCreate, Update, Remove UsageType UsageFlags of Addresses' Address field of HCP and HCO entitiesSearch for change requestsREST APIGET /changeRequests/_byEntityCrosswalkSearch for change requests by entity crosswalkGet change request by uriREST APIGET /changeRequests/{uri}Get change request by uriCreate change requestREST APIPOST /changeRequestCreate change request - internalGet change requestREST APIGET /changeRequestGet change request - internalDependent componentsComponentInterfaceDescriptionReltio AdapterInternal Java interfaceUsed to communicate with ReltioNucleus AdapterInternal Java interfaceUsed to communicate with NucleusAuthorization EngineInternal Java interfaceProvide user authorizationMDM Routing EngineInternal Java interfaceProvides routingConfigurationThe configuration is a composition of dependent components configurations and parameters specified below.Config ParameterDefault valueDescriptionmongo.urlMongo urlmongo.dbNameMongo database namemongoConnectionProps.dbUrlMongo database urlmongoConnectionProps.dbNameMongo database namemongoConnectionProps.userMongo usernamemongoConnectionProps.passwordMongo user passwordmongoConnectionProps.entityCollectionNameEntity collection namemongoConnectionProps.lovCollectionNameLov collection name"
},
{
"title": "Authorization Engine",
"pageID": "164469870",
"pageLink": "/display/GMDM/Authorization+Engine",
"content": "DescriptionAuthorization Engine is responsible for authorizing users executing API operations. All API operations are secured and can be executed only by users that have specific roles. The engine checks if a user has a role allowed access to API operation.FlowsThe Authorization Engine is engaged in all flows exposed by Manager component.Exposed interfacesInterface NameTypeJava class:methodDescriptionAuthorization ServiceJavaAuthorizationService:processCheck user permission to run a specific operation. If the user has granted a role to run this operation method will allow to call it. In other case authorization exception will throwDependent componentsAll of the below operations are exposed by Manager component and details about was described here. Description column of below table has role names which have to be assigned to user permitted to use described operations.ComponentInterfaceDescriptionManagerGET /entities/*GET_ENTITIESGET /relations/*GET_RELATIONGET /changeRequests/*GET_CHANGE_REQUESTSDELETE /entities/crosswalkDELETE /relations/crosswalkDELETE_CROSSWALKPOST /hcpPOST /batch/hcpCREATE_HCPPATCH /hcpPATCH /batch/hcpUPDATE_HCPPOST /hcoPOST /batch/hcoCREATE_HCOPATCH /hcoPATCH /batch/hcoUPDATE_HCOPOST /mcoPOST /batch/mcoCREATE_MCOPATCH /mcoPATCH /batch/mcoUPDATE_MCOPOST /relationsCREATE_RELATIONPATCH /relationsUPDATE_RELATIONPOST /changeRequestCREATE_CHANGE_REQUESTPOST /updateUsageFlagsUSAGE_FLAG_UPDATEPOST /entities/{entityId}/_mergeMERGE_ENTITIESPOST /entities/{entityId}/_unmergeUNMERGE_ENTITIESGET /lookupLOOKUPSConfigurationConfiguration parameterDescriptionusers[].nameUser nameusers[].descriptionDescription of userusers[].defaultClientDefault MDM client that is used in the case when the user doesn't specify countryusers[].rolesList of roles assigned to userusers[].countriesList of countries whose data can be managed by userusers[].sourcesList of sources (crosswalk types) whose can be used during manage data by the user"
},
{
"title": "MDM Routing Engine",
"pageID": "164469900",
"pageLink": "/display/GMDM/MDM+Routing+Engine",
"content": "DescriptionMDM Routing Engine is responsible for making a decision on which MDM system has to be used to process client requests. The call is made based on a decision table that maps MDM system with a  country.In the case of multiple MDM systems for the same market, the decision table contains a user dimension allowing to select MDM system by user name.FlowsThe MDM Routing Engine is engaged in all flows supported by Manager component.Exposed interfacesInterface NameTypeJava class:methodDescriptionMDM Client FactoryJavaMDMClientFactory:getDefaultMDMClientGet default MDM clientJavaMDMClientFactory:getDefaultMDMClient(username)Get default MDM client specified for the userJavaMDMClientFactory:getMDMClient(country)Get MDM client that supports the specified countryJavaMDMClientFactory:getMDMClient(country, user);Get MDM client that  supported specified country and userDependent componentsComponentInterfaceDescriptionReltio AdapterJavaProvides integrations with Reltio MDMNucleus AdapterJavaProvides integration with Nucleus MDMConfigurationConfiguration parameterDescriptionusers[].namename of userusers[].defaultClientdefault mdm client for userclientsDecisionTable.{selector name}.countries[]List of countriesclientsDecisionTable.{selector name}.clients[]Map where the key is username and value is MDM client name that will be used to process data comes from defined countries.Special key "default" defines the default MDM client which will be used in the case when there is no specific client for username.mdmFactoryConfig.{mdm client name}.typeType of MDM client. Only two values are supported: "reltio" or "nucleus".mdmFactoryConfig.{mdm client name}.configMDM client configuration. It is based on adapter type: Reltio or Nucleus"
},
{
"title": "Nucleus Adapter",
"pageID": "164469896",
"pageLink": "/display/GMDM/Nucleus+Adapter",
"content": "DescriptionNucleus-adapter is a component of MDM Hub that is used to communicate with Nucleus. It provides 4 types of operations:get entity,get entities,create/update entity,get relationNucleus 360 is an old COMPANY MDM platform comparing to Reltio. It's used to store and manage data about healthcare professionals(hcp) and healthcare organizations(hco).It uses batch processing so the results of the operation are applied for the golden record after a certain period of time.Nucleus accepts requests with an XML formatted body and also sends responses in the same way.Technology: java 8, nucleusCode link: nucleus-adapterFlowsCreate/update entityGet entityGet entitiesGet relationsExposed interfacesInterface NameTypeJava class:methodDescriptionget entityJavaNucleusMDMClient:getEntityProvides a mechanism to obtain information about the specified entity. Entity can be obtained by entity id, e.g. xyzf325Two Nucleuses methods are used to obtain detailed information about the entity.First is Look up method, thanks to which we can obtain basic information about entity(xml format) by its id.Next, we provide that information for the second Nucleus method, Get Profile Details that sends a response with all available information (xml format).Finally, we gather all received information about the entity, convert it to Relto model(json format) and transfer it to a client.get entitiesJavaNucleusMDMClient:getEntitiesProvide a mechanism to obtain basic information about a group of entities. This entity group is determined based on the defined filters(e.g. first name, last name, professional type code).For this purpose only Nuclueus look up method is used. 
This way we receive only basic information about entities but it is performance-optimized and does not create unnecessary load on the server.create/update entityJavaNucleusMDMClient:createEntityUsing the Nucleus Add Update web service method nucleus-adapter provides a mechanism to create or update data present in the database according to the business rules (createEntity method).Nucleus-adapter accepts JSON formatted request bodies, maps them to xml format, and then sends them to Nucleus.get relationsJavaNucleusMDMClient:getRelationTo get relations nucleus-adapter uses the Nucleus affiliation interface.Nucleus produces an XML formatted response and nucleus-adapter transforms it to the Reltio model (JSON format).Dependent componentsComponentInterfaceDescriptionNucleushttps://{{ nucleus host }}/CustomerManage_COMPANY_EU_Prod/manage.svc?singleWsdlNucleus endpoint for Creating/updating hcp and hcohttps://{{ nucleus host }}/Nuc360ProfileDetails5.0/Api/DetailSearchNucleus endpoint for getting details about entityhttps://{{ nucleus host }}/Nuc360QuickSearch5.0/LookupNucleus endpoint for getting basic information about entityhttps://{{ nucleus host }}/Nuc360DbSearch5.0/api/affiliationNucleus endpoint for getting relations informationConfigurationConfig ParameterDefault valueDescriptionnucleusConfig.baseURLnullBase url of Nucleus mdmnucleusConfig.usernamenullNucleus usernamenucleusConfig.passwordnullNucleus passwordnucleusConfig.additionalOptions.customerManageUrlnullNucleus endpoint for creating/updating entitiesnucleusConfig.additionalOptions.profileDetailsUrlnullNucleus endpoint for getting detailed information about entitynucleusConfig.additionalOptions.quickSearchUrlnullNucleus endpoint for getting basic information about entitynucleusConfig.additionalOptions.affiliationUrlnullNucleus endpoint for getting information about entities relationsnucleusConfig.additionalOptions.defaultIdTypenullDefault IdType for entities search (used if another is not provided)"
},
{
"title": "Quality Engine and Rules",
"pageID": "164469944",
"pageLink": "/display/GMDM/Quality+Engine+and+Rules",
"content": "DescriptionQuality engine is used to verify data quality in entity attributes. It is used for MCO, HCO, HCP entities.Quality engine is responsible for preprocessing Entity when a specific precondition is met. This engine is started in the following cases:Rest operation (POST/PATCH) on /hco endpoint on MDM ManagerRest operation (POST/PATCH) on /hcp endpoint on MDM ManagerRest operation (POST/PATCH) on /mco endpoint on MDM ManagerIt has two two components quality-engine and quality-engine-integrationTechnology:fasterxmlCode link:quality-engine - http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/browse/quality-enginequality-engine-integration - http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/browse/quality-engine-integrationquality rules - http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/browse/mdm-manager/src/main/resources/qualityRulesBusiness requirements (provided by AJ)COMPANY Teams → Global Customer MDM → 20-Design → Hub → Global-MDM_DQ_*FlowsValidation by quality rules is done before sending entities to reltio. Quality rules should be enabled in configuration.Data quality checking is started in com.COMPANY.mdm.manager.service.QualityService. 
The whole rule flow for an entity has one context (com.COMPANY.entityprocessingengine.pipeline.RuleContext)RuleA rule has the following configuration:name - name of the rule - it is requiredpreconditions - preconditions that should be met to run the rulecheck - check that should be triggered if preconditions are metaction - action that should be triggered if check is evaluated to truePreconditionsStructure:Example:preconditions:    - type: source      values:          - CENTRISPossible types:not - it evaluates to true if all preconditions that are underneath evaluate to falsematch - it evaluates to true if the given attribute value matches any of the listed patternsanyMatch - it evaluates to true if the given array attribute value matches any of the listed patternsexistsInContext - it checks if given fieldName with specified value exists in contextcontext - check if entity context values contain only allowed ones source - check if entity has source of given typeChecksStructure:Example:check:   type: match   attribute: FirstName   values:       - '[^0-9@#$%^&*~!"<>?/|\\_]+'Possible types:ageCheck - check if age specified in date or year attribute is older than specified number of yearsmandatoryGroup - check if at least one from specified list of attributes existsmandatory - check if specified attribute existsmandatoryAll - check if all specified attributes existnot - check if opposite of the check is truegroupMatch - check if a group of attributes matches specified valuesmatch - check if attribute value matches the specified valueempty - empty checkActionsStructure:Example:action:   type: add   attributes:      - DataQuality[].DQDescription   value: "{source}_005_02"Possible types:clean - cleans attribute value - replaces pattern with given stringreject - rejects entityremove - remove attributeset - sets attribute valuemodify - modify attribute valueadd - adds attribute valuechineseNameToEnglish - converts Chinese value 
to EnglishaddressDigest - calculate address digestaddressCrosswalkValue - sets digest valueconvertCase - convert case lower, upper, capitalizeremoveEmptyAttributes - removes empty attributesprefixByCountry - adds country prefix to attribute valuemakeSourceAddressInfo - adds attribute with source address infopadding - pads attribute value with specified characterassignId - assigns id setContextValue - set value that will be stored in contextDependent componentsComponentInterfaceFlowDescriptionmanagerQualityServiceValidationRuns quality engine validationConfigurationConfig ParameterDefault valueDescriptionvalidationOntrueIt turns on or off validation - it needs to be specified in application.ymlpartialOverrideValidationOntrueIt turns on or off validation for updateshcpQualityRulesConfigslist of files with quality rules for hcpIt contains a list of files with quality rules for hcphcoQualityRulesConfigslist of files with quality rules for hcoIt contains a list of files with quality rules for hcohcpAffiliatedHCOsQualityRulesConfigslist of files with quality rules for affiliated hcpIt contains a list of files with quality rules for affiliated HCOmcoQualityRulesConfigslist of files with quality rules for mcoIt contains a list of files with quality rules for mco"
},
{
"title": "Reltio Adapter",
"pageID": "164469898",
"pageLink": "/display/GMDM/Reltio+Adapter",
"content": "DescriptionReltio-adapter is a component of MDM Hub(part of mdm-manager) that is used to communicate with Reltio. Technology: Java,Code link: reltio-adapterFlowsCreate/update entityGet entityGet entitiesMerge entityUnmerge entityCreate relationGet relationsCreate DCRGet DCRApply DCRReject DCRDelete DCRExposed interfacesInterface NameTypeEndpoint patternDescriptionGet entityJavaReltioMDMClient:getEntityGet detailed entity information by entity URIGet entitiesJavaReltioMDMClient:getEntitiesGet basic information about a group of entities based on applied filtersCreate/Update entityJavaReltioMDMClient:createEntityCreate/partially update entity(HCO, HCP, MCO)Create/Update multiple entitiesJavaReltioMDMClient:createEntitiesBatch create HCO/HCP/MCO entitiesDelete entityJavaReltioMDMClient:deleteEntityDeletes entity by its URIFind entityJavaReltioMDMClient:findEntityFinds entity. The search mechanism is flexible and chooses the proper method:If URI applied in entityPattern then use the getEntity method.If URI not specified and finds crosswalks then uses getEntityByCrosswalk methodOtherwise, it uses the find matches methodMerge entitiesJavaReltioMDMClient:mergeEntitiesMerge two entities basing on reltio merging rules.Also accepts explicit winner as explicitWinnerEntityUri.Unmerge entitiesJavaReltioMDMClient:unmergeEntitiesUnmerge entitiesUnmerge Entity TreeJavaReltioMDMClient:treeUnmergeEntitiesUnmerge entities recursively(details in reltio treeunmerge documentation)Scan entitiesJavaReltioMDMClient:scanEntitiesIterate entities of a specific type in a particular tenant.Delete crosswalkJavaReltioMDMClient:deleteCrosswalkDeletes crosswalk from an objectFind matchesJavaReltioMDMClient:findMatchesReturns potential matches based on rules in entity type configurationGet entity connectionsJavaReltioMDMClient:getMultipleEntityConnectionsGet connected entitiesGet entity by a crosswalkJavaReltioMDMClient:getEntityByCrosswalkGet entity by the crosswalkDelete relation by a 
crosswalkJavaReltioMDMClient:deleteRelationDelete relation by relation URIGet relationJavaReltioMDMClient:getRelationGet relation by relation URICreate/Update relationJavaReltioMDMClient:createRelationCreate/update relationScan relationsJavaReltioMDMClient:scanRelationsIterate relations of a specific type in a particular tenant.Get relation by a crosswalkJavaReltioMDMClient:getRelationByCrosswalkGet relation by the crosswalkBatch create relationJavaReltioMDMClient:createRelationsBatch create relationSearch for change requestsJavaReltioMDMClient:searchSearch for change requests by entity crosswalkGet change request by URIJavaReltioMDMClient:getChangeRequestGet change request by URICreate change requestJavaReltioMDMClient:createChangeRequestCreate change request - internalDelete change requestJavaReltioMDMClient:deleteChangeRequestDelete change requestApply change requestJavaReltioMDMClient:applyChangeRequestApply data change requestReject change requestJavaReltioMDMClient:rejectChangeRequestReject data change requestAdd/update external infoJavaReltioMDMClient:createOrUpdateExternalInfoAdd external info to specified DCRDependenciesComponentInterfaceDescriptionReltioGET {TenantURL}/entities/{Entity ID}Get detailed information about the entityhttps://docs.reltio.com/entitiesapi/getentity.htmlGET {TenantURL}/entitiesGet basic (or chosen) information about entity based on applied filtershttps://docs.reltio.com/mulesoftconnector/getentities_2.htmlGET {TenantURL}/entities/_byCrosswalk/{crosswalkValue}?type={sourceType}Get entity by crosswalkhttps://docs.reltio.com/entitiesapi/getentitybycrosswalk_2.htmlDELETE {TenantURL}/{entity object URI}Delete entityhttps://docs.reltio.com/entitiesapi/deleteentity.htmlPOST {TenantURL}/entitiesCreate/update single or multiple entitieshttps://docs.reltio.com/entitiesapi/createentities.htmlPOST {TenantURL}/entities/_dbscanhttps://docs.reltio.com/searchapi/iterateentitiesbytype.html?hl=_dbscanPOST 
{TenantURL}/entities/{winner}/_sameAs?uri=entities/{looser}Merge entities based on looser and winner IDhttps://docs.reltio.com/mergeapis/mergingtwoentities.htmlPOST {TenantURL}/<origin id>/_unmerge?contributorURI=<spawn URI>Unmerge entitieshttps://docs.reltio.com/mergeapis/unmergeentitybycontriburi.htmlPOST {TenantURL}/<origin id>/_treeUnmerge?contributorURI=<spawn URI>Tree unmerge entitieshttps://docs.reltio.com/mergeapis/unmergeentitybycontriburi.htmlGET {TenantURL}/relations/Get relation by relation URIhttps://docs.reltio.com/relationsapi/getrelationship.htmlPOST {TenantURL}/relationsCreate relationhttps://docs.reltio.com/relationsapi/createrelationships.htmlPOST {TenantURL}/relations/_dbscanhttps://docs.reltio.com/relationsapi/iteraterelationshipbytype.html?hl=relations%2F_dbscan GET {TenantURL}/changeRequests Get change requesthttps://docs.reltio.com/dcrapi/searchdcr.htmlGET {TenantURL}/changeRequests/{id}Returns a data change request by ID.https://docs.reltio.com/dcrapi/getdatachangereq.htmlPOST {TenantURL}/changeRequests Create data change requesthttps://docs.reltio.com/dcrapi/createnewdatachangerequest.htmlDELETE {TenantURL}/changeRequests/{id} Delete data change requesthttps://docs.reltio.com/dcrapi/deletedatachangereq.htmlPOST {TenantURL}/changeRequests/_byUris/_applyThis API applies (commits) all changes inside a data change request to real entities and relationships.https://docs.reltio.com/dcrapi/applydcr.htmlPOST {TenantURL}/changeRequests/_byUris/_rejectReject data change requesthttps://docs.reltio.com/dcrapi/rejectdcr.htmlPOST {TenantURL}/entities/_matches Returns potential matches based on rules in entity type configuration.https://docs.reltio.com/matchesapi/serachpotentialmatchesforjsonentity.htmlPOST {TenantURL}/_connectionsGet connected entitieshttps://docs.reltio.com/relationsapi/requestdifferententityconnections.html?hl=_connectionsDELETE /{crosswalk URI}Delete 
crosswalkhttps://docs.reltio.com/mergeapis/dataapicrosswalks.html?hl=delete,crosswalkdataapicrosswalks__deletecrosswalk#dataapicrosswalks__deletecrosswalkPOST {TenantURL}/changeRequests/0000OVV/_externalInfoAdd/update external info to DCRhttps://docs.reltio.com/dcrapi/addexternalinfotochangereq.html?hl=_externalinfoConfigurationConfig ParameterDefault valueDescriptionmdmConfig.authURLnullReltio authentication URLmdmConfig.baseURLnullReltio base URLmdmConfig.rdmUrlnullReltio  RDM URLmdmConfig.usernamenullReltio usernamemdmConfig.passwordnullReltio passwordmdmConfig.apiKeynullReltio apiKeymdmConfig.apiSecretnullReltio apiSecrettranslateCache.milisecondsToExpiretranslateCache.objectsLimit"
},
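The `_byCrosswalk` lookup in the dependency table above takes the crosswalk value as a path segment and the source type as a query parameter. A minimal sketch of building that request URL (the class name, helper name, and tenant URL are illustrative assumptions, not taken from the actual ReltioMDMClient):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch of how a client might build the Reltio
// "get entity by crosswalk" URL described above.
public class CrosswalkUrlBuilder {

    // Builds {TenantURL}/entities/_byCrosswalk/{crosswalkValue}?type={sourceType}
    public static String getByCrosswalkUrl(String tenantUrl, String crosswalkValue, String sourceType) {
        String value = URLEncoder.encode(crosswalkValue, StandardCharsets.UTF_8);
        String type = URLEncoder.encode(sourceType, StandardCharsets.UTF_8);
        return tenantUrl + "/entities/_byCrosswalk/" + value + "?type=" + type;
    }

    public static void main(String[] args) {
        // Tenant URL and source type values here are placeholders.
        System.out.println(getByCrosswalkUrl("https://host/api/tenant", "12345", "NUCLEUS"));
    }
}
```

Encoding both the crosswalk value and the source type keeps special characters in source identifiers from breaking the query string.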
{
"title": "Map Channel",
"pageID": "302697819",
"pageLink": "/display/GMDM/Map+Channel",
"content": "DescriptionMap Channel integrates data from the GCP and GRV systems. External systems use the SQS queue or REST API to load data. The data is then copied to the internal queue, which allows the processing to be redone at a later time. The identifier and market contained in the data are used to retrieve complete data via REST requests. The data is then sent to the Manager component for storage in the MDM system. The application provides features for filtering events by country, status or permissions. This component uses different mappers to process data for the COMPANY or IQVIA data model.Technology: Java, Spring, Apache CamelCode link: map-channelFlowsGRV & GCP events processingExposed interfacesInterface nameTypeEndpoint patternDescriptioncreate contactREST APIPOST /gcpcreate HCP profile based on GCP contact dataupdate contactREST APIPUT /gcp/{gcpId}update HCP profile based on GCP contact datacreate userREST APIPOST /grvcreate HCP profile based on GRV user dataupdate userREST APIPUT /grv/{grvId}update HCP profile based on GRV user dataDependent componentsComponentInterfaceDescriptionManagerREST APIcreate HCP, create HCO, update HCP, update HCOConfigurationThe configuration is a composition of dependent components' configurations and the parameters specified below.Kafka processing configConfig paramDefault valueDescriptionkafkaProducerPropkafka producer propertieskafkaConsumerPropkafka consumer propertiesprocessing.endpointskafka internal topics configurationprocessing.endpoints.[endpoint-type].topickafka endpoint-type topic nameprocessing.endpoints.[endpoint-type].activeOnStartupshould the endpoint start on application startupprocessing.endpoints.[endpoint-type].consumerCountkafka endpoint consumer countprocessing.endpoints.[endpoint-type].breakOnFirstErrorshould kafka rebalance on errorprocessing.endpoints.[endpoint-type].autoCommitEnableshould kafka auto commit be enabledDEG configConfig paramDefault valueDescriptionDEG.urlDEG gateway URLDEG.oAuth2ServiceDEG authorization service 
URLDEG.protocolDEG protocolDEG.portDEG portDEG.prefixDEG API prefixTransaction log configConfig paramDefault valueDescriptiontransactionLogger.kafkaEfk.enableshould the kafka efk transaction logger be enabledtransactionLogger.kafkaEfk.kafkaProducer.topickafka efk topic nametransactionLogger.kafkaEfk.logContentOnlyOnFailedLog request body only on failed transactionstransactionLogger.simpleLog.enableshould the simple console transaction logger be enabledFilter configConfig paramDefault valueDescriptionactiveCountries.GRVlist of allowed GRV countriesactiveCountries.GCPlist of allowed GCP countriesdeactivatedStatuses.[Source].[Country]list of ValidationStatus attribute values for which HCP will be deleted for given country and sourcedeactivateGCPContactWhenInactivelist of countries for which GCP will be deleted when contact is inactivedeactivatedWhenNoPermissionslist of countries for which GCP will be deleted when contact permissions are missingdeleteOption.[Source].noneHCP will be sent to MDM when deleted date is presentdeleteOption.[Source].hardcall delete crosswalk action when deleted date is presentdeleteOption.[Source].softcall update HCP when delete date is presentMapper configConfig paramDefault valueDescriptiongcpMappername of GCP mapper implementationgrvMappername of GRV mapper implementationMappingsIQVIA mappingCOMPANY mapping"
},
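The Map Channel filter rules above (activeCountries, deactivatedStatuses) can be sketched as a small decision helper. Class, method, and parameter names here are illustrative assumptions, not the actual map-channel classes:

```java
import java.util.Map;
import java.util.Set;

// Illustrative sketch of the Map Channel filtering configuration:
// an event is processed only for active countries, and an HCP is
// deleted when its validation status is listed for the source/country.
public class MapChannelFilter {
    private final Map<String, Set<String>> activeCountries;                   // source -> allowed countries
    private final Map<String, Map<String, Set<String>>> deactivatedStatuses;  // source -> country -> statuses

    public MapChannelFilter(Map<String, Set<String>> activeCountries,
                            Map<String, Map<String, Set<String>>> deactivatedStatuses) {
        this.activeCountries = activeCountries;
        this.deactivatedStatuses = deactivatedStatuses;
    }

    /** True when the event's country is in the source's active-country list. */
    public boolean isActive(String source, String country) {
        return activeCountries.getOrDefault(source, Set.of()).contains(country);
    }

    /** True when the ValidationStatus value is configured as a deactivation status. */
    public boolean shouldDelete(String source, String country, String validationStatus) {
        return deactivatedStatuses
                .getOrDefault(source, Map.of())
                .getOrDefault(country, Set.of())
                .contains(validationStatus);
    }
}
```

A filter like this would sit in front of the mappers, so events for inactive countries never reach the Manager.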
{
"title": "MDM Admin",
"pageID": "284817212",
"pageLink": "/display/GMDM/MDM+Admin",
"content": "DescriptionMDM Admin exposes an API of tools automating repetitive and/or difficult Operating Procedures and Tasks. It also aggregates APIs of various Hub components that should not be exposed to the world, while providing an authorization layer. Permissions to each Admin operation can be granted to client's API user.FlowsKafka OffsetResend EventsPartial ListReconciliationExposed interfacesREST APISwagger: https://api-apac-prod-gbl-mdm-hub.COMPANY.com/api-admin-spec-apac-prod/swagger-ui/index.htmlDependent componentsComponentInterfaceFlowDescriptionReconciliation ServiceReconciliation Service APIEntities ReconciliationAdmin uses internal Reconciliation Service API to trigger reconciliations. Passes the same inputs and returns the same results.Relations ReconciliationPartials ReconciliationPrecallback ServicePrecallback Service APIPartials ListAdmin fetches a list of partials directly from Precallback Service and returns it to the user or uses it to reconcile all entities stuck in partial state.Partials ReconciliationAirflowAirflow APIEvents ResendAdmin allows triggering an Airflow DAG with request parameters/body and checking its status.Events Resend ComplexKafkaKafka Client/Admin APIKafka OffsetsAdmin allows modifying topic/group offsets.ConfigurationConfig ParameterDefault valueDescriptionairflow-config: url: https://airflow-amer-nprod-gbl-mdm-hub.COMPANY.com user: admin password: ${airflow.password} dag: reconciliation_system_amer_dev-Dependent Airflow configuration including external URL, DAG name and credentials. Entities Reload operation will trigger a DAG of configured name in the configured Airflow instance.services:services: reconciliationService: mdmhub-mdm-reconciliation-service-svc:8081 precallbackService: mdmhub-precallback-service-svc:8081URLs of dependent services. Default values lead to internal Kubernetes services."
},
{
"title": "MDM Integration Tests",
"pageID": "302687584",
"pageLink": "/display/GMDM/MDM+Integration+Tests",
"content": "DescriptionThe module contains Integration Tests. All Integration Tests are divided into different categories based on the environment on which they are executed.Technology:JUnitSpring TestCitrusGradle tasksThe table shows which environment uses which gradle task.EnvironmentGradle taskConfiguration propertiesALLcommonIntegrationTests-GBLUSintegrationTestsForCOMPANYModelRegionUShttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/ansible/inventory/kube_dev_gblus/group_vars/gw-services/int_tests.ymlCHINAintegrationTestsForCOMPANYModelChinahttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/ansible/inventory/kube_devchina_apac/group_vars/gw-services/int_tests.ymlEMEAintegrationTestsForCOMPANYModelRegionEMEAhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/ansible/inventory/kube_dev_emea/group_vars/gw-services/int_tests.ymlAPACintegrationTestsForCOMPANYModelRegionAPAChttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/ansible/inventory/kube_dev_apac/group_vars/gw-services/int_tests.ymlAMERintegrationTestsForCOMPANYModelRegionAMERhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/ansible/inventory/kube_dev_amer/group_vars/gw-services/int_tests.ymlOTHERSintegrationTestsForIqviaModelhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/ansible/inventory/kube_dev_gbl/group_vars/gw-services/int_tests.ymlThe Jenkins script with configuration: http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/jenkins/k8s_int_test.groovyGradle tasks - IT categoriesThe table shows which test categories are included in gradle tasks.Gradle taskTest 
categorycommonIntegrationTestsCommonIntegrationTestintegrationTestsForCOMPANYModelRegionUSIntegrationTestForCOMPANYModelIntegrationTestForCOMPANYModelRegionUSintegrationTestsForCOMPANYModelChinaIntegrationTestForCOMPANYModelIntegrationTestForCOMPANYModelChinaintegrationTestsForCOMPANYModelIntegrationTestForCOMPANYModel●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●integrationTestsForCOMPANYModelRegionAMERIntegrationTestForCOMPANYModel●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●IntegrationTestForCOMPANYModelRegionAMERintegrationTestsForCOMPANYModelRegionAPACIntegrationTestForCOMPANYModel●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●integrationTestsForCOMPANYModelRegionEMEAIntegrationTestForCOMPANYModel●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●IntegrationTestForCOMPANYModelRegionEMEAintegrationTestsForIqviaModelIntegrationTestForIqiviaModelTests are configured in build.gradle file: http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/browse/build.gradle?at=refs%2Fheads%2Fproject%2FboldmoveTest use cases included in categoriesTest categoryTest use casesCommonIntegrationTestCommon Integration TestIntegrationTestForIqiviaModelIntegration Test For Iqvia ModelIntegrationTestForCOMPANYModelIntegration Test For COMPANY ModelIntegrationTestForCOMPANYModelRegionUSIntegration Test For COMPANY Model Region USIntegrationTestForCOMPANYModelChinaIntegration Test For COMPANY Model ChinaIntegrationTestForCOMPANYModelRegionAMERIntegration Test For COMPANY Model Region AMER●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●Integration Test For COMPANY Model DCR2ServiceIntegrationTestsForCOMPANYModelRegionEMEAIntegration Test For COMPANY Model Region EMEA"
},
{
"title": "Nucleus Subscriber",
"pageID": "164469790",
"pageLink": "/display/GMDM/Nucleus+Subscriber",
"content": "DescriptionNucleus subscriber collects events from Amazon AWS S3, modifies them, and then transfers them to the right Kafka topic.Data changes are stored as archive files on S3 from where they are then pulled by the nucleus subscriber.The next step is to modify the event from the Reltio format to one accepted by the MDM Hub. The modified data is then transferred to the appropriate Kafka topic.Data pulls from S3 are performed periodically so the changes made are visible after some time.Part of: Streaming channelTechnology: Java, Spring, Apache CamelCode link: nucleus-subscriberFlowsEntity change events processing (Nucleus) Exposed interfacesInterface NameTypeEndpoint patternDescriptionKafka topic KAFKA{env}-internal-nucleus-eventsEvents pulled from S3 are then transformed and published to the Kafka topicDependenciesComponentInterfaceFlowDescriptionAWS S3Entity change events processing (Nucleus)Stores events regarding data modification in NucleusEntity enricherNucleus Subscriber downstream component. 
Collects events from Kafka and produces events enriched with the targetEntityConfigurationConfig ParameterDefault valueDescriptionnucleus_subscriber.server.port8082Nucleus subscriber portnucleus_subscriber.kafka.servers10.192.71.136:9094Kafka servernucleus_subscriber.lockingPolicy.zookeeperServernullZookeeper servernucleus_subscriber.lockingPolicy.groupNamenullZookeeper group namenucleus_subscriber.deduplicationCache.maxSize100000nucleus_subscriber.deduplicationCache.expirationTimeSeconds3600nucleus_subscriber.kafka.groupIdhubKafka group Idnucleus_subscriber.kafka.usernamenullKafka usernamenucleus_subscriber.kafka.passwordnullKafka user passwordnucleus_subscriber.publisher.entities.topicdev-internal-integration-testsnucleus_subscriber.publisher.dictioneries.topicdev-internal-reltio-dictionaries-eventsnucleus_subscriber.publisher.relationships.topicdev-internal-integration-testsnucleus_subscriber.mongoConnectionProp.dbUrlnullMongoDB urlnucleus_subscriber.mongoConnectionProp.dbNamenullMongoDB database namenucleus_subscriber.mongoConnectionProp.usernullMongoDB usernucleus_subscriber.mongoConnectionProp.passwordnullMongoDB user passwordnucleus_subscriber.mongoConnectionProp.chechConnectionOnStartupnullCheck connection on startup( yes/no )nucleus_subscriber.poller.typefileSource typenucleus_subscriber.poller.enableOnStartupyesEnable on startup( yes/no )nucleus_subscriber.poller.fileMasknullInput files masknucleus_subscriber.poller.bucketNamecandf-mesosName of S3 bucketnucleus_subscriber.poller.processingTimeoutMs3000000Timeout in millisecondsnucleus_subscriber.poller.inputFolderC:/PROJECTS/COMPANY/GIT/mdm-publishing-hub/nucleus-subscriber/src/test/resources/dataInput directorynucleus_subscriber.poller.outputFoldernullOutput directorynucleus_subscriber.poller.keynullPoller keynucleus_subscriber.poller.secretnullPoller secretnucleus_subscriber.poller.regionEU_WEST_1Poller regionnucleus_subscriber.poller.alloweSubDirsnullAllowed sub directories( e.g. 
by country code - AU, CA )nucleus_subscriber.fileFormat.hcp.*Professional.expInput file format for hcpnucleus_subscriber.fileFormat.hco.*Organization.expInput file format for hconucleus_subscriber.fileFormat.dictionary.*Code_Header.expInput file format for dictionarynucleus_subscriber.fileFormat.dictionaryItem.*Code_Item.expInput file format for dictionary itemnucleus_subscriber.fileFormat.dictionaryItemDesc.*Code_Item_Description.expInput file format for dictionary item descriptionnucleus_subscriber.fileFormat.dictionaryItemExternal.*Code_Item_External.expInput file format for dictionary item externalnucleus_subscriber.fileFormat.customerMerge.*customer_merge.expInput file format for customer mergenucleus_subscriber.fileFormat.specialty.*Specialty.expInput file format for specialtynucleus_subscriber.fileFormat.address.*Address.expInput file format for addressnucleus_subscriber.fileFormat.degree.*Degree.expInput file format for degreenucleus_subscriber.fileFormat.identifier.*Identifier.expInput file format for identifiernucleus_subscriber.fileFormat.communication.*Communication.expInput file format for communicationnucleus_subscriber.fileFormat.optout.*Optout.expInput file format for optoutnucleus_subscriber.fileFormat.affiliation.*Affiliation.expInput file format for affiliationnucleus_subscriber.fileFormat.affiliationRole.*AffiliationRole.expInput file format for affiliation role."
},
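The nucleus_subscriber.fileFormat.* masks above route each pulled S3 file to a format-specific parser. A small sketch of that classification, assuming the masks are Java-style regexes matched against the file name (the class and method names are illustrative, not from the actual nucleus-subscriber code):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Optional;
import java.util.regex.Pattern;

// Sketch: map an incoming file name to its configured format type
// using a few of the documented fileFormat masks.
public class NucleusFileClassifier {
    private final Map<String, Pattern> masks = new LinkedHashMap<>();

    public NucleusFileClassifier() {
        masks.put("hcp", Pattern.compile(".*Professional\\.exp"));
        masks.put("hco", Pattern.compile(".*Organization\\.exp"));
        masks.put("address", Pattern.compile(".*Address\\.exp"));
        masks.put("customerMerge", Pattern.compile(".*customer_merge\\.exp"));
    }

    /** Returns the first format type whose mask matches the whole file name. */
    public Optional<String> classify(String fileName) {
        return masks.entrySet().stream()
                .filter(e -> e.getValue().matcher(fileName).matches())
                .map(Map.Entry::getKey)
                .findFirst();
    }
}
```

Files that match no mask would simply be skipped by the poller.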
{
"title": "OK DCR Service",
"pageID": "164469929",
"pageLink": "/display/GMDM/OK+DCR+Service",
"content": "DescriptionValidation of information regarding healthcare institutions and professionals based on ONE KEY webservices databaseTechnology: java 8, spring boot, mongodb, kafka-streamsCode link: mdm-onekey-dcr-service FlowsData Steward ResponseSubmit Validation RequestTrace Validation RequestExposed interfacesInterface NameTypeEndpoint patternDescriptioninternal onekeyvr inputKAFKA${env}-internal-onekeyvr-inevents being sent by the event publisher component. Event types being considered: HCP_*, HCO_*, ENTITY_MATCHES_CHANGEDinternal onekeyvr change requests inputKAFKA${env}-internal-onekeyvr-change-requests-inDependent componentsComponentInterfaceFlowDescriptionManagerGetEntitygetEntitygetting the entity from RELTIOMDMIntegrationServicegetMatchesgetting matches from RELTIOtranslateLookupstranslating lookup codescreateEntityDCR entity created in Reltio and the relation between the processed entity and the DCR entitycreateResponsepatchEntityupdating the entity in RELTIOBoth ONEKEY service and the Manager service are called with the retry policy.ConfigurationConfig ParameterDefault valueDescriptiononekey.oneKeyIntegrationService.url${oneKeyClient.url}onekey.oneKeyIntegrationService.userName${oneKeyClient.userName}onekey.oneKeyIntegrationService.password${oneKeyClient.password}onekey.oneKeyIntegrationService.connectionPoint${oneKeyClient.connectionPoint}onekey.oneKeyIntegrationService.logMessages${oneKeyClient.logMessages}onekey.oneKeyIntegrationService.retrying.maxAttemts22Limit to the number of attempts -> Exponential Back Offonekey.oneKeyIntegrationService.retrying.initialIntervalMs1000Initial interval -> Exponential Back Offonekey.oneKeyIntegrationService.retrying.multiplier2.0Multiplier -> Exponential Back Offonekey.oneKeyIntegrationService.retrying.maxIntervalMs3600000Max interval -> Exponential Back 
Offonekey.gatewayIntegrationService.url${gateway.url}onekey.gatewayIntegrationService.userName${gateway.userName}onekey.gatewayIntegrationService.apiKey${gateway.apiKey}onekey.gatewayIntegrationService.logMessages${gateway.logMessages}onekey.gatewayIntegrationService.timeoutMs${gateway.timeoutMs}onekey.gatewayIntegrationService.gatewayRetryConfig.maxAttemts22onekey.gatewayIntegrationService.gatewayRetryConfig.initialIntervalMs1000onekey.gatewayIntegrationService.gatewayRetryConfig.multiplier2.0onekey.gatewayIntegrationService.gatewayRetryConfig.maxIntervalMs3600000onekey.gatewayIntegrationService.gatewayRetryConfig.maxAttemts22Limit to the number of attempts -> Exponential Back Offonekey.gatewayIntegrationService.gatewayRetryConfig.initialIntervalMs1000Initial interval -> Exponential Back Offonekey.gatewayIntegrationService.gatewayRetryConfig.multiplier2.0Multiplier -> Exponential Back Offonekey.gatewayIntegrationService.gatewayRetryConfig.maxIntervalMs3600000Max interval -> Exponential Back Offonekey.submitVR.eventInputTopic${env}-internal-onekeyvr-inSubmit Validation input topiconekey.submitVR.skipEventTypeSuffix_REMOVED_INACTIVATED_LOST_MERGESubmit Validation event type string endings to skiponekey.submitVR.storeNamewindow-deduplication-storeInternal kafka topic that stores events to deduplicateonekey.submitVR.window.duration4hThe size of the windows in milliseconds.onekey.submitVR.window.name<no value>Internal kafka topic that stores events being grouped by.onekey.submitVR.window.gracePeriod0The grace period to admit out-of-order events to a window.onekey.submitVR.window.byteLimit107374182Maximum number of bytes the size-constrained suppression buffer will use.onekey.submitVR.window.suppressNamedcr-suppressThe specified name for the suppression node in the topology.onekey.traceVR.enabletrueonekey.traceVR.minusExportDateTimeMillis3600000onekey.traceVR.schedule.cron0 0 * ? 
* * # every hourquartz.properties.org.quartz.scheduler.instanceNamemdm-onekey-dcr-serviceCan be any string, and the value has no meaning to the scheduler itself - but rather serves as a mechanism for client code to distinguish schedulers when multiple instances are used within the same program. If you are using the clustering features, you must use the same name for every instance in the cluster that is logically the same Scheduler.quartz.properties.org.quartz.scheduler.skipUpdateChecktrueWhether or not to skip running a quick web request to determine if there is an updated version of Quartz available for download. If the check runs, and an update is found, it will be reported as available in Quartz's logs. You can also disable the update check with the system property “org.terracotta.quartz.skipUpdateCheck=true” (which you can set in your system environment or as a -D on the java command line). It is recommended that you disable the update check for production deployments.quartz.properties.org.quartz.scheduler.instanceIdGenerator.classorg.quartz.simpl.HostnameInstanceIdGeneratorOnly used if org.quartz.scheduler.instanceId is set to “AUTO”. Defaults to “org.quartz.simpl.SimpleInstanceIdGenerator”, which generates an instance id based upon host name and time stamp. Other InstanceIdGenerator implementations include SystemPropertyInstanceIdGenerator (which gets the instance id from the system property “org.quartz.scheduler.instanceId”), and HostnameInstanceIdGenerator which uses the local host name (InetAddress.getLocalHost().getHostName()). 
You can also implement the InstanceIdGenerator interface yourself.quartz.properties.org.quartz.jobStore.classcom.novemberain.quartz.mongodb.MongoDBJobStorequartz.properties.org.quartz.jobStore.mongoUri${mongo.url}quartz.properties.org.quartz.jobStore.dbName${mongo.dbName}quartz.properties.org.quartz.jobStore.collectionPrefix quartz-onekey-dcrquartz.properties.org.quartz.scheduler.instanceIdAUTOCan be any string, but must be unique for all schedulers working as if they are the same logical Scheduler within a cluster. You may use the value “AUTO” as the instanceId if you wish the Id to be generated for you. Or the value “SYS_PROP” if you want the value to come from the system property “org.quartz.scheduler.instanceId”.quartz.properties.org.quartz.jobStore.isClusteredtruequartz.properties.org.quartz.threadPool.threadCount1"
},
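The retry settings documented above (initialIntervalMs=1000, multiplier=2.0, maxIntervalMs=3600000) describe a standard exponential back-off. A standalone worked sketch of the resulting wait intervals, not the service's actual retry code:

```java
import java.util.ArrayList;
import java.util.List;

// Worked example of the exponential back-off policy configured for
// the OneKey and gateway clients: each wait doubles until it is
// capped at maxIntervalMs.
public class ExponentialBackoff {

    /** Wait intervals (ms) before each retry; the first attempt has no wait. */
    public static List<Long> intervals(int maxAttempts, long initialMs, double multiplier, long maxMs) {
        List<Long> result = new ArrayList<>();
        double interval = initialMs;
        for (int attempt = 1; attempt < maxAttempts; attempt++) {
            result.add(Math.min((long) interval, maxMs));
            interval *= multiplier;
        }
        return result;
    }

    public static void main(String[] args) {
        // With the documented defaults the waits double: 1000, 2000, 4000, ...
        System.out.println(intervals(6, 1000, 2.0, 3_600_000));
    }
}
```

With maxAttempts=22 the later retries all hit the one-hour cap (3600000 ms), so a long outage does not produce ever-growing waits.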
{
"title": "Publisher",
"pageID": "164469927",
"pageLink": "/display/GMDM/Publisher",
"content": "DescriptionPublisher is a member of Streaming channel. It distributes events to target client topics based on configured routing rules.Main tasks:Filtering events based on their contentRouting events based on publisher configurationEnriching nucleus eventsUpdating MongoDBTechnology: Java, Spring, KafkaCode: event-publisherFlowsReltio events streamingNucleus Events StreamingCallbacksEvent filtering and routing rulesLOV update process (Nucleus)Data Steward ResponseSubmit Validation RequestSnowflake: Events publish flowExposed interfacesInterface NameTypeEndpoint patternDescriptionKafka - input topics for entities dataKAFKA${env_name}-internal-reltio-proc-events${env_name}-internal-nucleus-eventsStores events about entity, relation and change request changes.Kafka - input topics for dictionaries dataKAFKA${env_name}-internal-reltio-dictionaries-events${env_name}-internal-nucleus-dictionaries-eventsStores events about lookup (LOV) changes.Kafka - output topicsKAFKA${env_name}-out-**(All topics that get events from publisher)Output topics for Publisher.After filtering, each event is transferred to the appropriate topic based on routing rules defined in the configurationResend eventsRESTPOST /resendLastEventAllows triggering event reconstruction. 
Events are created based on the current state fetched from MongoDB and then forwarded according to defined routing rules.Mongo's collectionsMongo collectionentityHistoryCollection storing the last known state of entities dataMongo collectionentityRelationsCollection storing the last known state of relations dataMongo collectionLookupValuesCollection storing the last known state of lookups (LOVs) dataDependenciesComponentInterfaceFlowDescriptionCallback ServiceKAFKAEntity change events processing (Reltio)Creates input for Publisher. Responsible for the following transformations:HCO names calculationDangling affiliationsCrosswalk cleanerPrecallback streamMongoDBEntity change events processing (Reltio)Entity change events processing (Nucleus)Stores the last known state of objects such as entities and relations. Used as cache data to reduce Reltio load. Updated after every entity change eventKafka Connect Snowflake connectorKAFKASnowflake: Events publish flowReceives events from the publisher and loads them into the Snowflake databaseClients of the HUBClients that receive events from MDM HUBMAPP, China, etcConfigurationConfig ParameterDefault valueDescriptionevent_publisher.usersnullPublisher users dictionary used to authenticate user in ResendService operations.User parameters:name,description,roles(list) - currently there is only one role which can be assigned to a user:RESEND_EVENT - a user with this role is allowed to use the resend last event operationevent_publisher.activeCountries- AD- BL- FR- GF- GP- MC- MF- MQ- MU- NC- PF- PM- RE- WF- YT- CNList of active countriesevent_publisher.lookupValuesPoller.interval60mPolling interval for lookups (LOVs) from Reltioevent_publisher.lookupValuesPoller.batchSize1000Poller batch sizeevent_publisher.lookupValuesPoller.enableOnStartupyesEnable on startup( yes/no )event_publisher.lookupValuesPoller.dbCollectionNameLookupValuesName of the Mongo collection storing fetched lookup dataevent_publisher.eventRouter.incomingEventsincomingEvents: reltio: topic: 
dev-internal-reltio-entity-and-relation-events enableOnStartup: no startupOrder: 10 properties: autoOffsetReset: latest consumersCount: 20 maxPollRecords: 50 pollTimeoutMs: 30000Configuration of the incoming topic with events regarding entities, relations etc.event_publisher.eventRouter.dictionaryEventsdictionaryEvents: reltio: topic: dev-internal-reltio-dictionaries-events enableOnStartup: true startupOrder: 30 properties: autoOffsetReset: earliest consumersCount: 10 maxPollRecords: 5 pollTimeoutMs: 30000Configuration of incoming topic with events regarding dictionary changes.event_publisher.eventRouter.historyCollectionNameentityHistoryName of the collection storing entities stateevent_publisher.eventRouter.relationCollectionNameentityRelationsName of the collection storing relations stateevent_publisher.eventRouter.routingRules.[]nullList of routing rules. A routing rule definition has the following parameters:id - unique identifier of the rule,selector - conditional expression written in Groovy which filters incoming events,destination - topic name."
},
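The routingRules evaluation described above can be sketched as follows. The real selectors are Groovy expressions evaluated against the event; a Java Predicate stands in for them here, and the rule/event shapes are illustrative assumptions rather than the actual event-publisher classes:

```java
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// Sketch of eventRouter.routingRules: every rule whose selector
// matches an incoming event contributes its destination topic.
public class EventRouter {

    record RoutingRule(String id, Predicate<Map<String, String>> selector, String destination) {}

    private final List<RoutingRule> rules;

    EventRouter(List<RoutingRule> rules) { this.rules = rules; }

    /** Returns the destination topics for an event, in rule order. */
    public List<String> route(Map<String, String> event) {
        return rules.stream()
                .filter(r -> r.selector().test(event))
                .map(RoutingRule::destination)
                .collect(Collectors.toList());
    }
}
```

Because rules are evaluated independently, one event can fan out to several client topics, which matches the one-input-many-outputs shape of the ${env_name}-out-** topics.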
{
"title": "Raw data service",
"pageID": "337869880",
"pageLink": "/display/GMDM/Raw+data+service",
"content": "DescriptionRaw data service is the component used to process source data. It allows expired data to be removed in real time and provides a REST interface for restoring source data on the environment.Technology:kotlin,kafka streams,spring bootCode link: Raw data serviceFlows Raw data flowsExposed interfacesBatch Controller - manage batch instancesInterface nameTypeEndpoint patternDescriptionRestore entitiesREST APIPOST /restore/entitiesRestore entities for selected parameters: entity types, sources, countries, date from1. Create consumer for entities topic and given offset - date from2. Poll and filter records3. Produce data to bundle input topicRestore relationsREST APIPOST /restore/relationsRestore relations for selected parameters: sources, countries, relation types and date from1. Create consumer for relations topic and given offset - date from2. Poll and filter records3. Produce data to bundle input topicCount entitiesREST APIPOST /restore/entities/countCount entities for selected parameters: entity types, sources, countries, date fromCount relationsREST APIPOST /restore/relations/countCount relations for selected parameters: sources, countries, relation types and date fromConfigurationConfig paramdescriptionkafka.groupIdkafka group idkafkaOtherother kafka consumer/producer propertiesentityTopictopic used to store entity datarelationTopictopic used to store relation datastreamConfig.patchKeyStoreNamestate store name used to store entities patch keysstreamConfig.relationStoreNamestate store name used to store relations patch keysstreamConfig.enabledis raw data stream processor enabledstreamConfig.kafkaOtherraw data processor stream kafka other propertiesrestoreConfig.enabledis restore api enabledrestoreConfig.consumer.pollTimeoutrestore api kafka topic consumer poll timeoutrestoreConfig.consumer.kafkaOtherother kafka consumer propertiesrestoreConfig.producer.outputrestore data producer output topic - manager bundle input 
topicrestoreConfig.producer.kafkaOtherother kafka producer properties"
},
{
"title": "Reconciliation Service",
"pageID": "164469826",
"pageLink": "/display/GMDM/Reconciliation+Service",
"content": "Reconciliation Service consumes reconciliation events from Reltio and decides whether an entity or relation should be refreshed in the Mongo cache. After reconciliation this service also produces reconciliation metrics: it counts changes and produces an event with all metadata and statistics about the reconciled entity/relationFlowsReconciliation+HUB-ClientReconciliation metricsConfigurationConfig ParameterDefault valueDescriptionreconciliation: eventInputTopic: eventOutputTopic:reconciliation: eventInputTopic: ${env}-internal-reltio-reconciliation-events eventOutputTopic: ${env}-internal-reltio-eventsConsumes events from eventInputTopic, decides about reconciliation and produces events to eventOutputTopicreconciliation: eventMetricsInputTopic: eventMetricsOutputTopic:metricRules: - name: operationRegexp: pathRegexp: valueRegexp: reconciliation: eventInputTopic: ${env}-internal-reltio-reconciliation-events eventOutputTopic: ${env}-internal-reltio-events eventMetricsInputTopic: ${env}-internal-reltio-reconciliation-metrics-event eventMetricsOutputTopic: ${env}-internal-reconciliation-metrics-efk-transactionsmetricRules: - name: reconciliation.object.missed operationRegexp: "remove" pathRegexp: "" valueRegexp: ".*" - name: reconciliation.object.added operationRegexp: "add" pathRegexp: "" valueRegexp: ".*" - name: reconciliation.lookupcode.error operationRegexp: "add" pathRegexp: "^.*/lookupCode$" valueRegexp: ".*" - name: reconciliation.lookupcode.changed operationRegexp: "replace" pathRegexp: "^.*/lookupCode$" valueRegexp: ".*" - name: reconciliation.value.changed operationRegexp: "add|replace|remove" pathRegexp: "^/attributes/.+$" valueRegexp: ".*" - name: reconciliation.other.reason operationRegexp: ".*" pathRegexp: ".*" valueRegexp: ".*"Consumes events from eventMetricsInputTopic, calculates the diff between the current and previous event, and produces statistics and metrics based on the diff. Finally it produces an event with all this information to eventMetricsOutputTopic"
},
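The metricRules above classify each diff entry (operation, path, value) by matching three regexes, with the first matching rule winning (the catch-all reconciliation.other.reason comes last). A standalone sketch of that matching, an illustrative stand-in rather than the service's actual implementation:

```java
import java.util.List;
import java.util.Optional;

// Sketch: classify one JSON-diff entry into a metric name using
// the documented metricRules shape (operation/path/value regexes).
public class MetricRuleMatcher {

    record Rule(String name, String operationRegexp, String pathRegexp, String valueRegexp) {
        boolean matches(String op, String path, String value) {
            return op.matches(operationRegexp)
                    && path.matches(pathRegexp)
                    && value.matches(valueRegexp);
        }
    }

    /** First matching rule wins, mirroring the ordered rule list in the config. */
    public static Optional<String> classify(List<Rule> rules, String op, String path, String value) {
        return rules.stream()
                .filter(r -> r.matches(op, path, value))
                .map(Rule::name)
                .findFirst();
    }
}
```

Ordering matters: a replace on a /lookupCode path must be tested against reconciliation.lookupcode.changed before the broader reconciliation.value.changed rule.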
{
"title": "Reltio Subscriber",
"pageID": "164469916",
"pageLink": "/display/GMDM/Reltio+Subscriber",
"content": "DescriptionReltio subscriber is part of Reltio events streaming flow. It consumes Reltio events from Amazon SQS, filters, maps, and transfers them to the Kafka topic.Part of: Streaming channelTechnology: Java, Spring, Apache CamelCode link: reltio-subscriberFlowsEntity change events processing (Reltio)Exposed interfacesInterface NameTypeEndpoint patternDescriptionKafka topic KAFKA${env}-internal-reltio-eventsEvents pulled from SQS are then transformed and published to the Kafka topicDependent componentsComponentInterfaceFlowDescriptionSQS queueEntity change events processing (Reltio)Stores events about entity modifications in ReltioEntity enricherReltio Subscriber downstream component. Collects events from Kafka and produces events enriched with the target entityConfigurationConfig ParameterDefault valueDescriptionreltio_subscriber.reltio.queuempe-01_FLy4mo0XAh0YEbNReltio queue namereltio_subscriber.reltio.queueOwner930358522410Reltio queue owner numberreltio_subscriber.reltio.concurrentConsumers1Max number of concurrent consumersreltio_subscriber.reltio.messagesPerPoll10Messages per pollreltio_subscriber.publisher.topicdev-internal-reltio-eventsPublisher kafka topicreltio_subscriber.publisher.enableOnStartupyesEnable on startupreltio_subscriber.publisher.filterSelfMergesnoFilter self merges( yes/no )reltio_subscriber.relationshipPublisher.topicdev-internal-reltio-relations-eventsRelationship publisher kafka topicreltio_subscriber.dcrPublisher.topicnullDCR publisher kafka topicreltio_subscriber.kafka.servers10.192.71.136:9094Kafka serversreltio_subscriber.kafka.groupIdhubKafka group Idreltio_subscriber.kafka.saslMechanismPLAINKafka sasl mechanismreltio_subscriber.kafka.securityProtocolSASL_SSLKafka security protocolreltio_subscriber.kafka.sslTruststoreLocationsrc/test/resources/client.truststore.jksKafka truststore locationreltio_subscriber.kafka.sslTuststorePasswordkafka123Kafka truststore passwordreltio_subscriber.kafka.usernamenullKafka 
usernamereltio_subscriber.kafka.passwordnullKafka user passwordreltio_subscriber.kafka.compressionCodecnullKafka compression codecreltio_subscriber.poller.types3Source typereltio_subscriber.poller.enableOnStartupnoEnable on startup( yes/no )reltio_subscriber.poller.fileMask.*Input files maskreltio_subscriber.poller.bucketNamecandf-mesosName of S3 bucketreltio_subscriber.poller.processingTimeoutMs7200000Timeout in milisecondsreltio_subscriber.poller.inputFoldernullInput directoryreltio_subscriber.poller.outputFoldernullOutput directoryreltio_subscriber.poller.keynullPoller keyreltio_subscriber.poller.secretnullPoller secretreltio_subscriber.poller.regionEU_WEST_1Poller regionreltio_subscriber.allowedEventTypes- ENTITY_CREATED- ENTITY_REMOVED- ENTITY_CHANGED- ENTITY_LOST_MERGE- ENTITIES_MERGED- ENTITIES_SPLITTED- RELATIONSHIP_CREATED- RELATIONSHIP_CHANGED- RELATIONSHIP_REMOVED- RELATIONSHIP_MERGED- RELATION_LOST_MERGE- CHANGE_REQUEST_CHANGED- CHANGE_REQUEST_CREATED- CHANGE_REQUEST_REMOVED- ENTITIES_MATCHES_CHANGEDEvent types that are processed when received.Other event types are being rejectedreltio_subscriber.transactionLogger.kafkaEfk.enablenullTransaction logger enabled( true/false)reltio_subscriber.transactionLogger.kafkaEfk.logContentOnlyOnFailednullLog content only on failed( true/false)reltio_subscriber.transactionLogger.kafkaEfk.kafkaConsumerProp.groupIdnullKafka consumer group Idreltio_subscriber.transactionLogger.kafkaEfk.kafkaConsumerProp.autoOffsetResetnullKafka transaction logger topicreltio_subscriber.transactionLogger.kafkaEfk.kafkaConsumerProp.consumerCountnullreltio_subscriber.transactionLogger.kafkaEfk.kafkaConsumerProp.sessionTimeoutMsnullSession 
timeoutreltio_subscriber.transactionLogger.kafkaEfk.kafkaConsumerProp.maxPollRecordsnullreltio_subscriber.transactionLogger.kafkaEfk.kafkaConsumerProp.breakOnFirstErrornullreltio_subscriber.transactionLogger.kafkaEfk.kafkaConsumerProp.consumerRequestTimeoutMsnullreltio_subscriber.transactionLogger.SimpleLog.enablenull"
},
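The event filtering the Reltio subscriber applies (the `allowedEventTypes` list and the `filterSelfMerges` switch described above) can be sketched as follows. This is an illustrative Python sketch, not the actual Java/Camel subscriber code, and the event field names (`type`, `winnerUri`, `loserUri`) are assumptions about the event payload:

```python
# Illustrative sketch of the reltio-subscriber filtering step:
# only configured event types are forwarded to the internal Kafka topic.
ALLOWED_EVENT_TYPES = {
    "ENTITY_CREATED", "ENTITY_REMOVED", "ENTITY_CHANGED",
    "ENTITY_LOST_MERGE", "ENTITIES_MERGED", "ENTITIES_SPLITTED",
    "RELATIONSHIP_CREATED", "RELATIONSHIP_CHANGED", "RELATIONSHIP_REMOVED",
    "RELATIONSHIP_MERGED", "RELATION_LOST_MERGE",
    "CHANGE_REQUEST_CHANGED", "CHANGE_REQUEST_CREATED",
    "CHANGE_REQUEST_REMOVED", "ENTITIES_MATCHES_CHANGED",
}

def accept(event, filter_self_merges=False):
    """Return True when the event should be published to Kafka."""
    if event.get("type") not in ALLOWED_EVENT_TYPES:
        return False  # other event types are rejected
    if filter_self_merges and event.get("type") == "ENTITIES_MERGED":
        # hypothetical self-merge check: winner and loser are the same URI
        if event.get("winnerUri") == event.get("loserUri"):
            return False
    return True
```

A rejected event is simply dropped before the transform-and-publish step; only accepted events reach the `${env}-internal-reltio-events` topic.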
{
"title": "Clients",
"pageID": "164470170",
"pageLink": "/display/GMDM/Clients",
"content": "The section describes clients (systems) that publish or subscribe data to MDM systems vis MDH HUBActive clients\n\n \n \n \n \n \n \n\n \n \n \n \n\n \n \n\n \n \n \n\n \n \n \n \n \n \n\n \n \n \n \nAggregated Contact ListCOMPANY MDM TeamNameContactAndrew J. VarganinAndrew.J.Varganin@COMPANY.comSowjanya Tirumalasowjanya.tirumala@COMPANY.comJohn AustinJohn.Austin@COMPANY.comTrivedi NishithNishith.Trivedi@COMPANY.comGLOBALClientContactsMAPDL-BT-Production-Engineering@COMPANY.comKOLDL-SFA-INF_Support_PforceOL@COMPANY.comSolanki, Hardik (US - Mumbai) <hsolanki@COMPANY.com>;Yagnamurthy, Maanasa (US - Hyderabad) <myagnamurthy@COMPANY.com>;ChinaMing Ming <MingMing.Xu@COMPANY.com>;Jiang, Dawei <Dawei.Jiang@COMPANY.com>MAPPShashi.Banda@COMPANY.comRajesh.K.Chengalpathy@COMPANY.comDebbie.Gelfand@COMPANY.comDinesh.Vs@COMPANY.comDL-MAPP-Navigator-Hypercare-Support@COMPANY.comJapan DWHDL-GDM-ServiceOps-Commercial_APAC@COMPANY.comGRACEDL-AIS-Mule-Integration-Support@COMPANY.comEngageDL-BTAMS-ENGAGE-PLUS@COMPANY.com;Amish.Adhvaryu@COMPANY.comPTRSSagar.Bodala@COMPANY.comOneMedMarsha.Wirtel@COMPANY.com;AnveshVedula.Chalapati@COMPANY.comMedicDL-F&BO-MEDIC@COMPANY.comGBL USClientContactsCDWNarayanan, Abhilash <Abhilash.KadampanalNarayanan@COMPANY.com>Raman, Krishnan <Krishnan.Raman@COMPANY.com>ETLNayan, Rajeev <Rajeev.Nayan3@COMPANY.com>Duvvuri, Satya <Satya.Duvvuri@COMPANY.com>KOLTikyani, Devesh <Devesh.Tikyani@COMPANY.com>Brahma, Bagmita <Bagmita.Brahma2@COMPANY.com>Solanki, Hardik <Hardik.Solanki@COMPANY.com>US Trade (FLEX COV)ClientContactsMain contactsDube, Santosh R <santosh.dube@COMPANY.com>Manseau, Melissa <Melissa.Manseau@COMPANY.com>Thirumurthy, Bala Subramanyam <BalaSubramanyam.Thirumurthy@COMPANY.com>Business TeamMax, Deanna <Deanna.Max@COMPANY.com>Faddah, Laura Jordan <Laura.Faddah@COMPANY.com>GIS(file transfer)Mandala, Venkata <venkata.mandala@COMPANY.com>Srivastava, Jayant <Jayant.Srivastava@COMPANY.com>"
},
{
"title": "KOL",
"pageID": "164470183",
"pageLink": "/display/GMDM/KOL",
"content": "\nData pushing\n Figure 22. KOL authentication with Identity ManagerKOL system push data to MDM integration service using REST API. To authenticate, KOL uses external Oauth2 authorization service named Identity Manager to fetch access token. Then system sends the REST request to integration service endpoint which validates access token using Identity Manager API.\n\nKOL manage data for several countries. Many of these is loaded to default MDM system (Reltio), supported by integration service but for GB, PT, DK and CA countries data is sent to Nucleus 360. Decision, where the data should be loaded, is made by MDM Manager logic. Based on Country attribute value, MDM manager selects the right MDM adapter. It is important to set the Country attribute value correctly during data updating. Same rule applies to the country query parameter during data fetching. Thanks to this, MDM manager is able to process the right data in the right MDM system. In case of updating data with the Country attribute set incorrectly, the REST request will be rejected. When data is being fetched without country attribute query parameter set, the default MDM (Reltio) will be used to resolve the data.\n\nEvent processing\nKOL application receives events in one standard way kafka topic. Events from Reltio MDM system are published to this topic directly after Reltio has processed changes, sent event to SQS and processed them by Event Publisher. It means that the Reltio processes change and send events in real time. Client, who listens for events, does not have to wait for receiving them too long.\n Figure 23. Difference between processing events in Reltio and Nucleus 360The situation changes when the entity changes are processed by Nucleus 360. This MDM publishes changes once in a while, so the events will be delivered to kafka topic with longer delay."
},
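The country-based routing rule described in the KOL page above can be sketched as follows. This is an illustrative Python sketch of the documented behaviour, not the actual MDM Manager (Java) code:

```python
# Countries whose KOL data is managed in Nucleus 360; everything else,
# including requests without a country, goes to the default MDM (Reltio).
NUCLEUS_COUNTRIES = {"GB", "PT", "DK", "CA"}

def select_mdm_adapter(country=None):
    """Pick the MDM adapter based on the profile's Country attribute."""
    if not country:
        # No country query parameter: resolve the data in the default MDM.
        return "Reltio"
    return "Nucleus360" if country.upper() in NUCLEUS_COUNTRIES else "Reltio"
```

For example, `select_mdm_adapter("GB")` routes to Nucleus 360, while `select_mdm_adapter("DE")` and `select_mdm_adapter()` both resolve against Reltio.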
{
"title": "Japan DWH",
"pageID": "164470060",
"pageLink": "/display/GMDM/Japan+DWH",
"content": "ContactsJapan DWH Feed Support DL: DL-GDM-ServiceOps-Commercial_APAC@COMPANY.com - it is valid until 15/04/2023DL-ATP-SERVICEOPS-JPN-DATALAKE@COMPANY.com - it will be valid since 15/04/2023 FlowsJapan DWH has only one batch process which consume the incremental file export from data warehouse, process this and loads data to MDM. This process is based on incremental batch engine and run on Airflow platform.Input filesThe input files are delivered by GIS to AWS S3 bucket.UATPRODS3 service accountdidn't createdsvc_gbi-cc_mdm_japan_rw_s3S3 Access key IDdidn't createdAKIATCTZXPPJU6VBUUKBS3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-projectS3 Foldermdm/UAT/inbound/JAPAN/mdm/inbound/JAPAN/Input data file mask JPDWH_[0-9]+.zipJPDWH_[0-9]+.zipCompressionZipZipFormatFlat files, DWH dedicated format Flat files, DWH dedicated format ExampleJPDWH_20200421202224.zipJPDWH_20200421202224.zipSchedulenoneAt 08:00 UTC on every day-of-week from Monday through Friday (0 8 * * 1-5). The input file is not delivered in Japan's holidays (https://www.officeholidays.com/countries/japan/2020)Airflow jobinc_batch_jp_stageinc_batch_jp_prodData mapping The detailed filed mappings are presented in the document.Mapping rules:Inactive HCPs, HCOs are not loaded in IDL.   They are filtered out using delete flags present in source files.  Profiles being inactivated in DWH source are soft-deleted from Reltio. Affiliations between hospitals and departments are not delivered by the source directly. They are derived from dri file (doctor  institution association) having department ids referring to a dictionary on affiliations.  Each hospital in Reltio has dedicated departments objects although departments are global dictionary in Japan DWH. HCP addresses are copied from affiliated HCOs. HCP workplaces refer to departments. Departments point to Main HCOs using MainHCO relations.  HCP affiliations pointing to inactive HCOs are skipped during the load, but HCP profiles are load. 
Department names and hospital names are added to address attributes (HcoName, MainHcoName) associated with HCPs to allow searching by these names.ConfigurationFlow configuration is stored in the MDM Environment configuration repository. For each environment where the flow should be enabled, the configuration file inc_batch_jp.yml has to be created in the location related to the configured environment: inventory/[env name]/group_vars/gw-airflow-services/ and the batch name "inc_batch_jp" has to be added to the "airflow_components" list which is defined in file inventory/[env name]/group_vars/gw-airflow-services/all.yml. The table below presents the location of the inc_batch_jp.yml file for the UAT and PROD env:UATPRODinc_batch_jp.ymlhttps://github.com/COMPANY/mdm-reltio-handler-env/blob/master/inventory/stage/group_vars/gw-airflow-services/inc_batch_jp.ymlhttps://github.com/COMPANY/mdm-reltio-handler-env/blob/master/inventory/prod/group_vars/gw-airflow-services/inc_batch_jp.ymlApplying configuration changes is done by executing the deploy Airflow's components procedure.SOPsThere is no particular SOP procedure for this flow. All common SOPs were described in the "Airflow:" chapter."
},
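The documented input file mask for the Japan DWH flow (`JPDWH_[0-9]+.zip`, e.g. `JPDWH_20200421202224.zip`) can be checked with a small sketch like the one below; the anchoring and the escaped dot are my additions to make the documented mask strict, not part of the source table:

```python
import re

# Mask from the Japan DWH input-files table: JPDWH_ followed by digits,
# then .zip. Anchored and dot-escaped here for a strict match.
JPDWH_MASK = re.compile(r"^JPDWH_[0-9]+\.zip$")

def is_japan_dwh_input(filename):
    """Return True when a file name matches the documented input mask."""
    return JPDWH_MASK.match(filename) is not None
```

Files that do not match the mask (wrong prefix, no timestamp digits, or a different extension) would simply not be picked up by the incremental batch.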
{
"title": "Nucleus",
"pageID": "164470256",
"pageLink": "/display/GMDM/Nucleus",
"content": "ContactsDelivering of data used by Nucleus's processes is maintained by Iqvia Team: COMPANY-MDM-Support@iqvia.comFlowsThere are several batch processes that loads data extracted from Nucleus to Reltio MDM. Data are delivered for countries: Canada, South Korea, Australia, United Kingdom, Portugal and Denmark as zip archive available at S3 bucket.Input filesUATPRODS3 service accountdidn't createdsvc_mdm_project_nuc360_rw-s3S3 Access key IDdidn't createdAKIATCTZXPPJTFMGRZFMS3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-projectS3 Foldermdm/UAT/inbound/APAC_CCV/AU/mdm/UAT/inbound/APAC_CCV/KR/mdm/UAT/inbound/nuc360/inc-batch/GB/mdm/UAT/inbound/nuc360/inc-batch/PT/mdm/UAT/inbound/nuc360/inc-batch/DK/mdm/UAT/inbound/nuc360/inc-batch/CA/mdm/inbound/nuc360/inc-batch/AU/mdm/inbound/nuc360/inc-batch/KR/mdm/inbound/nuc360/inc-batch/GB/mdm/inbound/nuc360/inc-batch/PT/mdm/inbound/nuc360/inc-batch/DK/mdm/inbound/nuc360/inc-batch/CA/Input data file mask NUCLEUS_CCV_[0-9_]+.zipNUCLEUS_CCV_[0-9_]+.zipCompressionZipZipFormatFlat files in CCV format Flat files in CCV format ExampleNUCLEUS_CCV_8000000792_20200609_211102.zipNUCLEUS_CCV_8000000792_20200609_211102.zipSchedulenoneinc_batch_apac_ccv_au_prod - at 17:00 UTC on every day-of-week from Monday through Friday (0 17 * * 1-5)inc_batch_apac_ccv_kr_prod - at 08:00 UTC on every day-of-week from Monday through Friday (0 8 * * 1-5)inc_batch_eu_ccv_gb_stage - at 07:00 UTC on every day-of-week from Monday through Friday (0 7 * * 1-5)inc_batch_eu_ccv_pt_stage - at 07:00 UTC on every day-of-week from Monday through Friday (0 7 * * 1-5)inc_batch_eu_ccv_dk_stage - at 07:00 UTC on every day-of-week from Monday through Friday (0 7 * * 1-5)inc_batch_amer_ccv_ca_prod - at 17:00 UTC on every day-of-week from Monday through Friday (0 17 * * 1-5)Airflow's 
DAGSinc_batch_apac_ccv_au_stageinc_batch_apac_ccv_kr_stageinc_batch_eu_ccv_gb_stageinc_batch_eu_ccv_pt_stageinc_batch_eu_ccv_dk_stageinc_batch_amer_ccv_ca_stageinc_batch_apac_ccv_au_prodinc_batch_apac_ccv_kr_prodinc_batch_eu_ccv_gb_stageinc_batch_eu_ccv_pt_stageinc_batch_eu_ccv_dk_stageinc_batch_amer_ccv_ca_prodData mappingData mapping is described in the following document.ConfigurationFlows configuration is stored in MDM Environment configuration repository. For each environment where the flows should be enabled configuration files has to be created in the location related to configured environment: inventory/[env name]/group_vars/gw-airflow-services/ and the batch name has to be added to "airflow_components" list which is defined in file inventory/[env name]/group_vars/gw-airflow-services/all.yml. Below table presents the location of flows configuration files for UAT and PROD env:Flow configuration fileUATPRODinc_batch_apac_ccv_au.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/inc_batch_apac_ccv_au.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/inc_batch_apac_ccv_au.ymlinc_batch_apac_ccv_kr.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/inc_batch_apac_ccv_kr.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/inc_batch_apac_ccv_kr.ymlinc_batch_eu_ccv_gb.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/inc_batch_eu_ccv_gb.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/inc_batch_eu_ccv_gb.ymlinc_batch_eu_ccv_pt.ymlhttp://bitbucket-insightsnow.CO
MPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/inc_batch_eu_ccv_pt.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/inc_batch_eu_ccv_pt.ymlinc_batch_eu_ccv_dk.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/inc_batch_eu_ccv_dk.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/inc_batch_eu_ccv_dk.ymlinc_batch_amer_ccv_ca.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/inc_batch_amer_ccv_ca.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/inc_batch_amer_ccv_ca.ymlTo deploy changes of the DAGs' configuration you have to execute the SOP Deploying DAGsSOPsThere is no particular SOP procedure for this flow. All common SOPs were described in the "Airflow:" chapter."
},
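The per-country inbound S3 prefixes in the Nucleus table above follow a simple pattern, sketched below. This is an illustrative composition of the documented paths only; the note about AU/KR using the APAC_CCV location on UAT reflects the table, and any environment name other than PROD is treated as UAT here by assumption:

```python
# Sketch of how the Nucleus inbound S3 prefixes from the table compose.
def nucleus_inbound_prefix(env, country):
    """Build the documented inbound folder for a Nucleus country feed."""
    base = "mdm/inbound/" if env == "PROD" else "mdm/UAT/inbound/"
    # Per the table, AU and KR still use the APAC_CCV location on UAT.
    if env != "PROD" and country in {"AU", "KR"}:
        return base + "APAC_CCV/" + country + "/"
    return base + "nuc360/inc-batch/" + country + "/"
```

So `nucleus_inbound_prefix("PROD", "GB")` yields `mdm/inbound/nuc360/inc-batch/GB/`, matching the PROD column of the table.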
{
"title": "Veeva New Zealand",
"pageID": "164470112",
"pageLink": "/display/GMDM/Veeva+New+Zealand",
"content": "ContactsDL-ATP-APC-APACODS-SUPPORT@COMPANY.comFlowThe flow transforms the Veeva's data to Reltio model and loads the result to MDM. Data contains HCPs and HCOs from New Zealand.This flow is divided into two steps:Pre-proccessing - Copying source files from Veeva's S3 bucket, filtering once and uploading result to HUB's bucket,Incremental batch - Running the standard incremental batch process.Each of these steps are realized by separated Airflow's DAGs.Input filesUATPRODVeeva's S3 service accountSRVC-MDMHUB_GBL_NONPRODSRVC-MDMHUB_GBLVeeva's S3 Access key IDAKIAYCS3RWHN72AQKG6BAKIAYZQEVFARKMXC574QVeeva's S3 bucketapacdatalakeprcaspasp55737apacdatalakeprcaspasp63567Veeva's S3 bucket regionap-southeast-1ap-southeast-1Veeva's S3 Folderproject_kangaroo/landing/veeva/sf_account/project_kangaroo/landing/veeva/sf_address_vod__c/project_kangaroo/landing/veeva/sf_child_account_vod__c/project_kangaroo/landing/veeva/sf_account/project_kangaroo/landing/veeva/sf_address_vod__c/project_kangaroo/landing/veeva/sf_child_account_vod__c/Veeva's Input data file mask * (all files inside above folders)* (all files inside above folders)Veeva's Input data file compressionnonenoneHUB's S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-projectHUB's S3 Foldermdm/UAT/inbound/APAC_VEEVA/mdm/inbound/APAC_PforceRx/HUS's input data file maskin_nz_[0-9]+.zipin_nz_[0-9]+.zipHUS's input data file compressionZipZipSchedule (is set only for pre-processing DAG)noneAt 06:00 UTC on every day-of-week from Monday through Friday (0 8 * * 1-5)Pre-processing Airflow's DAGinc_batch_apac_veeva_wrapper_stageinc_batch_apac_veeva_wrapper_prodIncremental batch Airflow's DAGinc_batch_apac_veeva_stageinc_batch_apac_veeva_prodData mappingData mapping is described in the following document.ConfigurationConfiguration of this flow is defined in two configuration files. 
The first of these, inc_batch_apac_veeva_wrapper.yml, specifies the pre-processing DAG configuration, and the second, inc_batch_apac_veeva.yml, defines the configuration of the DAG for the standard incremental batch process. To activate the flow on an environment, the files should be created in the following location inventory/[env name]/group_vars/gw-airflow-services/ and the batch names "inc_batch_apac_veeva_wrapper" and "inc_batch_apac_veeva" have to be added to the "airflow_components" list which is defined in file inventory/[env name]/group_vars/gw-airflow-services/all.yml. Changes made in the configuration are applied on the environment by running the Deploy Airflow Components procedure.The table below presents the location of the flow's configuration files for the UAT and PROD env:Configuration fileUATPRODinc_batch_apac_veeva_wrapper.ymlhttps://github.com/COMPANY/mdm-reltio-handler-env/blob/master/inventory/stage/group_vars/gw-airflow-services/inc_batch_apac_veeva_wrapper.ymlhttps://github.com/COMPANY/mdm-reltio-handler-env/blob/master/inventory/prod/group_vars/gw-airflow-services/inc_batch_apac_veeva_wrapper.ymlinc_batch_apac_veeva.ymlhttps://github.com/COMPANY/mdm-reltio-handler-env/blob/master/inventory/stage/group_vars/gw-airflow-services/inc_batch_apac_veeva.ymlhttps://github.com/COMPANY/mdm-reltio-handler-env/blob/master/inventory/prod/group_vars/gw-airflow-services/inc_batch_apac_veeva.ymlSOPsThere are no dedicated SOP procedures for this flow. However, you must remember that this flow consists of two DAGs which both have to finish successfully.All common SOPs were described in the "Incremental batch flows: SOP" chapter."
},
{
"title": "ODS",
"pageID": "164470116",
"pageLink": "/display/GMDM/ODS",
"content": "ContactsDL-ATP-APC-APACODS-SUPPORT@COMPANY.com - APAC ODS SupportDL-GBI-PFORCERX_ODS_Support@COMPANY.com - EU ODS SupportKaranam, Bindu <Bindu.Karanam@COMPANY.com>; velmurugan, Aarthi <Aarthi.velmurugan@COMPANY.com> - AMER ODS SupportFlowThe flow transforms the ODS's data to Reltio model and loads the result to MDM. Data contains HCPs and HCOs from: HK, ID, IN, MY, PH, PK, SG, TH, TW, VN, BL, FR, GF, GP, MF, MQ, MU, NC, PF, PM, RE, TF, WF, YT, SI, RS countries.This flow is divided into two steps:Pre-proccessing - Copying source files from ODS's bucket and then uploading these to HUB's bucket,Incremental batch - Running the standard incremental batch process.Each of these steps are realized by separated Airflow's DAGs.Input filesUAT APACUAT EUPROD APACPROD EUSupported countriesHK, ID, IN, MY, PH, PK, SG, TH, TW, VN, BLFR, GF, GP, MF, MQ, MU, NC, PF, PM, RE, TF, WF, YT, SI, RSHK, ID, IN, MY, PH, PK, SG, TH, TW, VN, BLFR, GF, GP, MF, MQ, MU, NC, PF, PM, RE, TF, WF, YT, SI, RSODS S3 service accountSRVC-GCMDMS3DEVSRVC-GCMDMS3DEVSRVC-GCMDMS3PRDsvc_gbicc_euw1_prod_partner_gcmdm_rw_s3ODS S3 Access key IDAKIAYCS3RWHN45FC4MOPAKIAYCS3RWHN45FC4MOPAKIAYZQEVFARE64ESXWHAKIA6NIP3JYIMUIQABMXODS S3 bucketapacdatalakeintaspasp100939apacdatalakeintaspasp100939apacdatalakeintaspasp104492pfe-gbi-eu-w1-prod-partner-internalODS S3 folder/APACODSD/GCMDM//APACODSD/GCMDM//APACODSD/GCMDM//eu-dmart-odsd-file-extracts/gateway/GATEWAY/ODS/PROD/GCMDM/ODS Input data file mask ****ODS Input data file compressionzipzipzipzipHUB's S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-projectpfe-baiaes-eu-w1-projectHUB's S3 Foldermdm/UAT/inbound/ODS/APAC/mdm/UAT/inbound/ODS/EU/mdm/inbound/ODS/APAC/mdm/inbound/ODS/EU/HUS's input data file mask****HUS's input data file compressionzipzipzipzipPre-processing Airflow's DAGmove_ods_apac_export_stagemove_ods_eu_export_stagemove_ods_apac_export_prodmove_ods_eu_export_prodPre-processing Airflow's DAG 
schedulenonenone0 6 * * 1-50 7 * * 2  (At 07:00 on Tuesday.)Incremental batch Airflow's DAGinc_batch_apac_ods_stageinc_batch_eu_ods_stageinc_batch_apac_ods_prodinc_batch_eu_ods_prodIncremental batch Airflow's DAG schedulenonenone0 8 * * 1-50 8 * * 2 (At 08:00 on Tuesday.)Data mappingData mapping is described in the following document.ConfigurationConfiguration of this flow is defined in two configuration files. First of these move_ods_apac_export.yml specifies the pre-processing DAG configuration and the second inc_batch_apac_ods.yml defines configuration of DAG for standard incremental batch process. To activate the flow on environment files should be created in the following location inventory/[env name]/group_vars/gw-airflow-services/ and batch names "move_ods_apac_export" and "inc_batch_apac_ods" have to be added to "airflow_components" list which is defined in file inventory/[env name]/group_vars/gw-airflow-services/all.yml. Changes made in configuration are applied on environment by running Deploy Airflow's components procedure.Below table presents the location of flows configuration files for UAT and PROD env:Configuration 
fileUATPRODmove_ods_apac_export.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/move_ods_apac_export.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/move_ods_apac_export.ymlinc_batch_apac_ods.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/inc_batch_apac_ods.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/inc_batch_apac_ods.ymlmove_ods_eu_export.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/move_ods_eu_export.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/move_ods_eu_export.ymlinc_batch_eu_ods.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/inc_batch_eu_ods.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/inc_batch_eu_ods.ymlSOPsThere are no dedicated SOP procedures for this flow. However, you must remember that this flow consists of two DAGs which both have to finish successfully.All common SOPs were described in the "Incremental batch flows: SOP" chapter."
},
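The ODS flow splits its countries between an APAC and an EU pipeline, each with its own pre-processing and incremental batch DAG. The country-to-region split from the table above can be sketched as follows; the function name and the error behaviour for unlisted countries are my assumptions:

```python
# Country groups from the ODS "Supported countries" row.
APAC_COUNTRIES = {"HK", "ID", "IN", "MY", "PH", "PK", "SG", "TH", "TW", "VN", "BL"}
EU_COUNTRIES = {"FR", "GF", "GP", "MF", "MQ", "MU", "NC", "PF", "PM",
                "RE", "TF", "WF", "YT", "SI", "RS"}

def ods_region(country):
    """Map a country code to the ODS pipeline (APAC or EU) that loads it."""
    if country in APAC_COUNTRIES:
        return "APAC"
    if country in EU_COUNTRIES:
        return "EU"
    raise ValueError("country not handled by the ODS flow: %s" % country)
```

The region then selects the DAG pair, e.g. `move_ods_apac_export_*` plus `inc_batch_apac_ods_*` for an APAC country.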
{
"title": "China",
"pageID": "164470000",
"pageLink": "/display/GMDM/China",
"content": "ACLsNameGateway User NameAuthenticationPing Federate UserRolesCountriesSourcesTopicChina client accesschina-clientKey AuthN/A- "CREATE_HCP"- "CREATE_HCO"- "UPDATE_HCO"- "UPDATE_HCP"- "GET_ENTITIES"- CN- "CN3RDPARTY"- "MDE"- "FACE"- "EVR"- dev-out-full-mde-cn- stage-out-full-mde-cn- dev-out-full-mde-cnContactsQianRu.Zhou@COMPANY.comFlowsBatch merge & unmergeDCR generation process (China DCR)[FL.IN.1] HCP & HCO update processesReportsReports"
},
{
"title": "Corrective batch process for EVR",
"pageID": "164470250",
"pageLink": "/display/GMDM/Corrective+batch+process+for+EVR",
"content": "Corrective batch process for EVR fixes China data using standard incremental batch mechanism. The process gets data from csv file, transforms to json model and loads to Reltio. During loading of changes following HCP's attributes can be changed:Name,Title,SubTypeCode,ValidationStatus,Specific Workplace can be ignored or its ValidationStatus can be changed,Specific MainWorkplace can be ignored.The load saves the changes in Reltio under crosswalk where:type of crosswalk is EVR,crosswalk's value is the same as Reltio id,crosswalk's source table is "corrective".Thanks this, it is easy to find changes that was made by this process.Input filesThe input files are delivered to s3 bucketUATPRODInput S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-projectInput S3 Foldermdm/UAT/inbound/CHINA/EVR/mdm/inbound/CHINA/EVR/Input data file mask evr_corrective_file_[0-9]*.zipevr_corrective_file_[0-9]*.zipCompressionzipzipFormatFlat files in CCV format Flat files in CCV format Exampleevr_corrective_file_20201109.zipevr_corrective_file_20201109.zipSchedulenonenoneAirflow's DAGSinc_batch_china_evr_stageinc_batch_china_evr_prodData mappingMapping from CSV to Reltio's json was describe in this document: evr_corrective_file_format_new.xlsxExample file presented input data: evr_corrective_file_20221215.csvConfigurationFlows configuration is stored in MDM Environment configuration repository. For each environment where the flow should be enabled configuration file inc_batch_china_evr.yml has to be created in the location related to configured environment: inventory/[env name]/group_vars/gw-airflow-services/ and the batch name "inc_batch_china" has to be added to "airflow_components" list which is defined in file inventory/[env name]/group_vars/gw-airflow-services/all.yml. 
The table below presents the location of the flow configuration files for the UAT and PROD environment:UATPRODhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/inc_batch_china_evr.ymlhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/inc_batch_china_evr.ymlSOPsThere is no particular SOP procedure for this flow. All common SOPs were described in the "Incremental batch flows: SOP" chapter."
},
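The crosswalk rules for the EVR corrective load (type EVR, value equal to the Reltio id, source table "corrective") can be sketched as a small builder. The dictionary shape and key names below are assumptions for illustration, not the exact Reltio payload format:

```python
# Sketch of the crosswalk the corrective load attaches to each change,
# per the documented rules; the JSON shape is an assumption.
def evr_corrective_crosswalk(reltio_id):
    """Build the crosswalk identifying an EVR corrective-batch change."""
    return {
        "type": "EVR",              # crosswalk type is EVR
        "value": reltio_id,         # value equals the Reltio id
        "sourceTable": "corrective" # marks the change as corrective
    }
```

Because every corrective change carries this crosswalk, changes made by the process can later be found by filtering on the "corrective" source table.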
{
"title": "Reports",
"pageID": "164469873",
"pageLink": "/display/GMDM/Reports",
"content": "Daily ReportsThere are 4 reports which their preparing is triggered by china_generate_reports_[env] DAG. The DAG starts all dependent report DAGs and then waits for files published by them on s3. When all required files are delivered to s3, DAG sents the email with generted reports to all configured recipients.china_generate_reports_[env]|-- china_import_and_gen_dcr_statistics_report_[env] |-- import_pfdcr_from_reltio_[env] +-- china_dcr_statistics_report_[env]|-- china_import_and_gen_merge_report_[env] |-- import_merges_from_reltio_[env] +-- china_merge_report_[env]|-- china_total_entities_report_[env]+-- china_hcp_by_source_report_[env]Daily DAGs are triggered by DAG china_generate_reportsUATPRODParent DAGchina_generate_reports_stagechina_generate_reports_prodSchedulenoneEvery day at 00:05.Filter applied to all reports:FieldValuecountrycnstatusACTIVEHCP by source reportThe Report shows how many HCPs was delivered to MDM by specific source.The Output  files are delivered to s3 bucket:UATPRODOutput S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-projectOutput S3 Foldermdm/UAT/outbound/china_reports/daily/mdm/outbound/china_reports/daily/Output data file mask china_hcp_by_source_report_.*.xlsxchina_hcp_by_source_report_.*.xlsxFormatMicrosoft Excel xlsxMicrosoft Excel xlsxExamplechina_hcp_by_source_report_20201113093437.xlsxchina_hcp_by_source_report_20201113093437.xlsxSchedulenonenoneAirflow's DAGSchina_hcp_by_source_report_stagechina_hcp_by_source_report_prodReport Templatechina_hcp_by_source_template.xlsxMongo scripthcp_by_source_report.jsApplied filters"country" : "CN""entityType": "configuration/entityTypes/HCP""status": "ACTIVE"Report fields description:ColumnDescriptionSourceThe source which delivered HCPHCPNumber of all HCPs which has the sourceDaily IncrementalNumber of HCPs modified last utc day.Total entities reportThe report shows total entities count, grouped by entity type, theirs validation status and speaker attribute.The Output  
files are delivered to the s3 bucketUATPRODOutput S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-projectOutput S3 Foldermdm/UAT/outbound/china_reports/daily/mdm/outbound/china_reports/daily/Output data file mask china_total_entities_report_.*.xlsxchina_total_entities_report_.*.xlsxFormatMicrosoft Excel xlsxMicrosoft Excel xlsxExamplechina_total_entities_report_20201113093437.xlsxchina_total_entities_report_20201113093437.xlsxSchedulenonenoneAirflow's DAGSchina_total_entities_report_stagechina_total_entities_report_prodReport Templatechina_total_entities_template.xlsxMongo scripttotal_entities_report.jsApplied filters"country" : "CN""status": "ACTIVE"Report fields description:ColumnDescriptionTotal_Hospital_MDMNumber of total hospital MDMTotal_Dept_MDMNumber of total department MDMTotal_HCP_MDMNumber of total HCP MDMValidated_HCPNumber of validated HCPPending_HCPNumber of pending HCPNot_Validated_HCPNumber of not validated HCPOther_Status_HCPNumber of HCPs with other statusTotal_Speaker Number of total speakersTotal_Speaker_EnabledNumber of enabled speakersTotal_Speaker_DisabledNumber of disabled speakersDCR statistics reportThe report shows statistics about data change requests which were created in MDM. Generating of this report is divided into two steps:Importing PfDataChangeRequest data from Reltio - this step is realized by the import_pfdcr_from_reltio_[env] DAG. It schedules a data export in Reltio using the Export Entities operation and then waits for the result. After the export file is ready, the DAG loads its content to mongo,Generating report - generates the report based on previously imported data. This step is performed by the china_dcr_statistics_report_[env] DAG.Both of the above steps are run sequentially by the china_import_and_gen_dcr_statistics_report_[env] DAG. 
The Output  files are delivered to s3 bucket:UATPRODOutput S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-projectOutput S3 Foldermdm/UAT/outbound/china_reports/daily/mdm/outbound/china_reports/daily/Output data file mask china_dcr_statistics_report_.*.xlsxchina_dcr_statistics_report_.*.xlsxFormatMicrosoft Excel xlsxMicrosoft Excel xlsxExamplechina_dcr_statistics_report_20201113093437.xlsxchina_dcr_statistics_report_20201113093437.xlsxAirflow's DAGSchina_dcr_statistics_report_stagechina_dcr_statistics_report_prodReport Templatechina_dcr_statistics_template.xlsxMongo scriptchina_dcr_statistics_report.jsApplied filtersThere are no additional conditions applied to select dataReport fields description:ColumnDescriptionTotal_DCR_MDMTotal number of DCRsNew_HCP_DCRTotal number of DCRs of type NewHCPNew_HCO_L1_DCRTotal number of DCRs of type NewHCOL1New_HCO_L2_DCRTotal number of DCRs of type NewHCOL2MultiAffil_DCRTotal number of DCRs of type MultiAffilNew_HCP_DCR_CompletedTotal number of DCRs of type NewHCP which have completed statusNew_HCO_L1_DCR_CompletedTotal number of DCRs of type NewHCOL1 which have completed statusNew_HCO_L2_DCR_CompletedTotal number of DCRs of type NewHCOL2 which have completed statusMultiAffil_DCR_CompletedTotal number of DCRs of type MultiAffil which have completed statusNew_HCP_AcceptTotal number of DCRs of type NewHCP which were acceptedNew_HCP_UpdateTotal number of DCRs of type NewHCP which were updated during responding for theseNew_HCP_MergeTotal number of DCRs of type NewHCP which were accepted and response had entities to mergeNew_HCP_MergeUpdateTotal number of DCRs of type NewHCP which were updated and response had entities to mergeNew_HCP_RejectTotal number of DCRs of type NewHCP which were rejectedNew_HCP_CloseTotal number of closed DCRs of type NewHCPAffil_AcceptTotal number of DCRs of type MultiAffil which were acceptedAffil_RejectTotal number of DCRs of type MultiAffil which were rejectedAffil_AddTotal number of DCRs of type 
MultiAffil which data were updated during respondingMultiAffil_DCR_CloseTotal number of closed DCRs of type MultiAffilNew_HCO_L1_UpdateTotal number of closed DCRs of type NewHCOL1 which data were updated during respondingNew_HCO_L1_RejectTotal number of rejected DCRs of type NewHCOL1 New_HCO_L1_CloseTotal number of closed DCRs of type NewHCOL1 New_HCO_L2_AcceptTotal number of accepted DCRs of type NewHCOL2New_HCO_L2_UpdateTotal number of DCRs of type NewHCOL2 which data were updated during respondingNew_HCO_L2_RejectTotal number of rejected DCRs of type NewHCOL2New_HCO_L2_CloseTotal number of closed DCRs of type NewHCOL2New_HCP_DCR_OpenedTotal number of opened DCRs of type NewHCPMultiAffil_DCR_OpenedTotal number of opened DCRs of type MultiAffilNew_HCO_L1_DCR_OpenedTotal number of opened DCRs of type NewHCOL1New_HCO_L2_DCR_OpenedTotal number of opened DCRs of type NewHCOL2New_HCP_DCR_FailedTotal number of failed DCRs of type NewHCPMultiAffil_DCR_FailedTotal number of failed DCRs of type MultiAffilNew_HCO_L1_DCR_FailedTotal number of failed DCRs of type NewHCOL1New_HCO_L2_DCR_FailedTotal number of failed DCRs of type NewHCOL2Merge reportThe report shows statistics about merges which occurred in MDM. Generating of this report, similar to the DCR statistics report, is divided into two steps:Importing merges data from Reltio - this step is performed by the import_merges_from_reltio_[env] DAG. It schedules a data export in Reltio using the Export Merge Tree operation and then waits for the result. After the export file is ready, the DAG loads its content to mongo,Generating report - generates the report based on previously imported data. This step is performed by the china_merge_report_[env] DAG.Both of the above steps are run sequentially by the china_import_and_gen_merge_report_[env] DAG. 
The output files are delivered to the s3 bucket:UATPRODOutput S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-projectOutput S3 Foldermdm/UAT/outbound/china_reports/daily/mdm/outbound/china_reports/daily/Output data file mask china_merge_report_.*.xlsxchina_merge_report_.*.xlsxFormatMicrosoft Excel xlsxMicrosoft Excel xlsxExamplechina_merge_report_20201113093437.xlsxchina_merge_report_20201113093437.xlsxSchedulenonenoneAirflow's DAGSchina_import_and_gen_merge_report_stagechina_import_and_gen_merge_report_prodReport Templatechina_daily_merges_template.xlsxMongo scriptmerge_report.jsApplied filters"country" : "CN"Report fields description:ColumnDescriptionDateDate when merges occurredDaily_Merge_HosptialTotal number of merges on HCODaily_Merge_HCPTotal number of merges on HCPDaily_Manually_Merge_HosptialTotal number of manual merges on HCODaily_Manually_Merge_HCPTotal number of manual merges on HCPMonthly ReportsThere are 8 monthly reports. All of them are triggered by china_monthly_generate_reports_[env], which then waits for the files generated and published to the S3 bucket by each dependent DAG. 
When all required files exist on S3, the DAG prepares the email with all files and sends it to the defined recipients.china_monthly_generate_reports_[env]|-- china_monthly_hcp_by_SubTypeCode_report_[env]|-- china_monthly_hcp_by_channel_report_[env]|-- china_monthly_hcp_by_city_type_report_[env]|-- china_monthly_hcp_by_department_report_[env]|-- china_monthly_hcp_by_gender_report_[env]|-- china_monthly_hcp_by_hospital_class_report_[env]|-- china_monthly_hcp_by_province_report_[env]+-- china_monthly_hcp_by_source_report_[env]Monthly DAGs are triggered by the DAG china_monthly_generate_reportsUATPRODParent DAGchina_monthly_generate_reports_stagechina_monthly_generate_reports_prodHCP by source reportThe report shows how many HCPs were delivered by a specific source.The Output files are delivered to the s3 bucketUATPRODOutput S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-projectOutput S3 Foldermdm/UAT/outbound/china_reports/daily/mdm/outbound/china_reports/daily/Output data file mask china_monthly_hcp_by_source_report_.*.xlsxchina_monthly_hcp_by_source_report_.*.xlsxFormatMicrosoft Excel xlsxMicrosoft Excel xlsxExamplechina_monthly_hcp_by_source_report_20201113093437.xlsxchina_monthly_hcp_by_source_report_20201113093437.xlsxSchedulenonenoneAirflow's DAGSchina_monthly_hcp_by_source_report_stagechina_monthly_hcp_by_source_report_prodReport Templatechina_monthly_hcp_by_source_template.xlsxMongo scriptmonthly_hcp_by_source_report.jsApplied filters"country" : "CN""entityType": "configuration/entityTypes/HCP""status": "ACTIVE"Report fields description:ColumnDescriptionSourceSource that delivered the HCPHCPNumber of all HCPs which have the sourceHCP by channel reportThe report presents the number of HCPs which were delivered to MDM through a specific Channel.The Output files are delivered to the s3 bucketUATPRODOutput S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-projectOutput S3 Foldermdm/UAT/outbound/china_reports/daily/mdm/outbound/china_reports/daily/Output data file mask 
china_monthly_hcp_by_channel_report_.*.xlsxchina_monthly_hcp_by_channel_report_.*.xlsxFormatMicrosoft Excel xlsxMicrosoft Excel xlsxExamplechina_monthly_hcp_by_channel_report_20201113093437.xlsxchina_monthly_hcp_by_channel_report_20201113093437.xlsxSchedulenonenoneAirflow's DAGSchina_monthly_hcp_by_channel_report_stagechina_monthly_hcp_by_channel_report_prodReport Templatechina_monthly_hcp_by_channel_template.xlsxMongo scriptmonthly_hcp_by_channel_report.jsApplied filters"country" : "CN""entityType": "configuration/entityTypes/HCP""status": "ACTIVE"Report fields description:ColumnDescriptionChannelChannel nameHCPNumber of all HCPs which match the channelHCP by SubTypeCode reportThe report presents HCPs grouped by their Medical Title (SubTypeCode).The Output files are delivered to the s3 bucketUATPRODOutput S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-projectOutput S3 Foldermdm/UAT/outbound/china_reports/daily/mdm/outbound/china_reports/daily/Output data file mask china_monthly_hcp_by_SubTypeCode_report_.*.xlsxchina_monthly_hcp_by_SubTypeCode_report_.*.xlsxFormatMicrosoft Excel xlsxMicrosoft Excel xlsxExamplechina_monthly_hcp_by_SubTypeCode_report_20201113093437.xlsxchina_monthly_hcp_by_SubTypeCode_report_20201113093437.xlsxSchedulenonenoneAirflow's DAGSchina_monthly_hcp_by_SubTypeCode_report_stage china_monthly_hcp_by_SubTypeCode_report_prodReport Templatechina_monthly_hcp_by_SubTypeCode_template.xlsxMongo scriptmonthly_hcp_by_SubTypeCode_report.jsApplied filters"country" : "CN""entityType": "configuration/entityTypes/HCP""status": "ACTIVE"Report fields description:ColumnDescriptionMedical TitleMedical Title (SubTypeCode) of the HCPHCPNumber of all HCPs which match the medical titleHCP by city type reportThe report shows the number of HCPs who work in a specific city type. The type of city is not available in MDM data. To determine the type of a specific city, the report uses an additional collection, chinaGeography, which maps a city's name to its type. 
Data in the collection can be updated on request of the China team.The Output files are delivered to the s3 bucketUATPRODOutput S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-projectOutput S3 Foldermdm/UAT/outbound/china_reports/daily/mdm/outbound/china_reports/daily/Output data file mask china_monthly_hcp_by_city_type_report_.*.xlsxchina_monthly_hcp_by_city_type_report_.*.xlsxFormatMicrosoft Excel xlsxMicrosoft Excel xlsxExamplechina_monthly_hcp_by_city_type_report_20201113093437.xlsxchina_monthly_hcp_by_city_type_report_20201113093437.xlsxSchedulenonenoneAirflow's DAGSchina_monthly_hcp_by_city_type_report_stage china_monthly_hcp_by_city_type_report_prodReport Templatechina_monthly_hcp_by_city_type_template.xlsxMongo scriptmonthly_hcp_by_city_type_report.jsApplied filters"country" : "CN""entityType": "configuration/entityTypes/HCP""status": "ACTIVE"Report fields description:ColumnDescriptionCity TypeCity Type taken from the chinaGeography collection which matches entity.attributes.Workplace.value.MainHCO.value.Address.value.City.valueHCPNumber of all HCPs which match the city typeHCP by department reportThe report presents the HCPs grouped by the department where they work.The Output files are delivered to the s3 bucketUATPRODOutput S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-projectOutput S3 Foldermdm/UAT/outbound/china_reports/daily/mdm/outbound/china_reports/daily/Output data file mask china_monthly_hcp_by_department_report_.*.xlsxchina_monthly_hcp_by_department_report_.*.xlsxFormatMicrosoft Excel xlsxMicrosoft Excel xlsxExamplechina_monthly_hcp_by_department_report_20201113093437.xlsxchina_monthly_hcp_by_department_report_20201113093437.xlsxSchedulenonenoneAirflow's DAGSchina_monthly_hcp_by_department_report_stage china_monthly_hcp_by_department_report_prodReport Templatechina_monthly_hcp_by_department_template.xlsxMongo scriptmonthly_hcp_by_department_report.jsApplied filters"country" : "CN""entityType": "configuration/entityTypes/HCP""status": 
"ACTIVE"Report fields description:ColumnDescriptionDeptDepartment's nameHCPNumber of all HCPs which match the deptHCP by gender reportThe report presents the HCPs grouped by gender.The Output  files are delivered to s3 bucketUATPRODOutput S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-projectOutput S3 Foldermdm/UAT/outbound/china_reports/daily/mdm/outbound/china_reports/daily/Output data file mask china_monthly_hcp_by_gender_report_.*.xlsxchina_monthly_hcp_by_gender_report_.*.xlsxFormatMicrosoft Excel xlsxMicrosoft Excel xlsxExamplechina_monthly_hcp_by_gender_report_20201113093437.xlsxchina_monthly_hcp_by_gender_report_20201113093437.xlsxSchedulenonenoneAirflow's DAGSchina_monthly_hcp_by_gender_report_stage china_monthly_hcp_by_gender_report_prodReport Templatechina_monthly_hcp_by_gender_template.xlsxMongo scriptmonthly_hcp_by_gender_report.jsApplied filters"country" : "CN""entityType": "configuration/entityTypes/HCP""status": "ACTIVE"Report fields description:ColumnDescriptionGenderGenderHCPNumber of all HCPs which match the genderHCP by hospital class reportThe report presents the HCPs grouped by theirs department.The Output  files are delivered to s3 bucketUATPRODOutput S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-projectOutput S3 Foldermdm/UAT/outbound/china_reports/daily/mdm/outbound/china_reports/daily/Output data file mask china_monthly_hcp_by_hospital_class_report_.*.xlsxchina_monthly_hcp_by_hospital_class_report_.*.xlsxFormatMicrosoft Excel xlsxMicrosoft Excel xlsxExamplechina_monthly_hcp_by_hospital_class_report_20201113093437.xlsxchina_monthly_hcp_by_hospital_class_report_20201113093437.xlsxSchedulenonenoneAirflow's DAGSchina_monthly_hcp_by_hospital_class_report_stage china_monthly_hcp_by_hospital_class_report_prodReport Templatechina_monthly_hcp_by_hospital_class_template.xlsxMongo scriptmonthly_hcp_by_hospital_class_report.jsApplied filters"country" : "CN""entityType": "configuration/entityTypes/HCP""status": "ACTIVE"Report fields 
description:ColumnDescriptionClassClassificationHCPNumber of all HCPs which match the classHCP by province reportThe report presents the HCPs grouped by the province where they work.The Output files are delivered to the s3 bucketUATPRODOutput S3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-projectOutput S3 Foldermdm/UAT/outbound/china_reports/daily/mdm/outbound/china_reports/daily/Output data file mask china_monthly_hcp_by_province_report_.*.xlsxchina_monthly_hcp_by_province_report_.*.xlsxFormatMicrosoft Excel xlsxMicrosoft Excel xlsxExamplechina_monthly_hcp_by_province_report_20201113093437.xlsxchina_monthly_hcp_by_province_report_20201113093437.xlsxSchedulenonenoneAirflow's DAGSchina_monthly_hcp_by_province_report_stage china_monthly_hcp_by_province_report_prodReport Templatechina_monthly_hcp_by_province_template.xlsxMongo scriptmonthly_hcp_by_province_report.jsApplied filters"country" : "CN""entityType": "configuration/entityTypes/HCP""status": "ACTIVE"Report fields description:ColumnDescriptionProvinceName of provinceHCPNumber of all HCPs which match the ProvinceSOPsHow can I check the status of generating reports?The status of report generation can be checked by verifying the task statuses on the main DAGs - china_generate_reports_[env] for daily reports or china_monthly_generate_reports_[env] for monthly reports. Both of these DAGs have a task "sendEmailReports" which waits for the files generated by dependent DAGs. If the required files are not published to S3 within the configured amount of time, the task will fail with the following message:\n[2020-11-27 12:12:54,085] {{docker_operator.py:252}} INFO - Caught: java.lang.RuntimeException: ERROR: Elapsed time 300 minutes. Timeout exceeded: 300\n[2020-11-27 12:12:54,086] {{docker_operator.py:252}} INFO - java.lang.RuntimeException: ERROR: Elapsed time 300 minutes. 
Timeout exceeded: 300\n[2020-11-27 12:12:54,086] {{docker_operator.py:252}} INFO - at SendEmailReports.getListOfFilesLoop(sendEmailReports.groovy:221)\n\tat SendEmailReports.processReport(sendEmailReports.groovy:257)\n[2020-11-27 12:12:54,290] {{docker_operator.py:252}} INFO - at SendEmailReports$processReport.call(Unknown Source)\n\tat sendEmailReports.run(sendEmailReports.groovy:279)\n[2020-11-27 12:12:55,552] {{taskinstance.py:1058}} ERROR - docker container failed: {'StatusCode': 1}\nIn this case you have to check the status of all dependent DAGs to find the reason for the failure, resolve the issue, and retry all failed tasks, starting with the tasks in the dependent DAGs and finishing with the task in the main DAG.Daily reports failed due to an error during importing data from Reltio. What to do?If you see that the DAGs import_pfdcr_from_reltio_[env] or import_merges_from_reltio_[env] are in a failed state, it probably means that the data export from Reltio took longer than usual. To confirm this supposition you have to show the details of the importing DAG and check the status of the waitingForExportFile task. If it is in a failed state and in the logs you can see the following messages:\n[2020-12-04 12:09:10,957] {{s3_key_sensor.py:88}} INFO - Poking for key : s3://pfe-baiaes-eu-w1-project/mdm/reltio_exports/merges_from_reltio_20201204T000718/_SUCCESS\n[2020-12-04 12:09:11,074] {{taskinstance.py:1047}} ERROR - Snap. Time is OUT.\nTraceback (most recent call last):\n File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 922, in _run_raw_task\n result = task_copy.execute(context=context)\n File "/usr/local/lib/python3.7/site-packages/airflow/sensors/base_sensor_operator.py", line 116, in execute\n raise AirflowSensorTimeout('Snap. Time is OUT.')\nairflow.exceptions.AirflowSensorTimeout: Snap. Time is OUT.\n[2020-12-04 12:09:11,085] {{taskinstance.py:1078}} INFO - Marking task as FAILED.\nYou can be pretty sure that the export is still being processed on the Reltio side. 
You can confirm this by using the tasks API. If on the returned list you can see tasks in the processing state, it means that MDM is still working on this export. To fix this issue in the DAG you have to restart the failed task. The DAG will start checking for the existence of the export file once again."
},
{
"title": "CDW (AMER)",
"pageID": "164470121",
"pageLink": "/pages/viewpage.action?pageId=164470121",
"content": "ContactsNarayanan, Abhilash <Abhilash.KadampanalNarayanan@COMPANY.com>Balan, Sakthi <Sakthi.Balan@COMPANY.com>Raman, Krishnan <Krishnan.Raman@COMPANY.com>GatewayAMER(manager)NameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicCDW user (NPROD)cdwExternal OAuth2CDW-MDM_client["CREATE_HCO","UPDATE_HCO","GET_ENTITIES","USAGE_FLAG_UPDATE"]["US"]["SHS","SHS_MCO","IQVIA_MCO","CENTRIS","SAP","IQVIA_DDD","ONEKEY","DT_340b","DEA","HUB_CALLBACK","IQVIA_RAWDEA","IQVIA_PDRP","ENGAGE","GRV","ICUE","KOL_OneView","COV","ENGAGE 1.0","GRV","IQVIA_RX","MILLIMAN_MCO","ICUE","KOL_OneView","SHS_RX","MMIT","INTEGRICHAIN_TRADE_PARTNER","INTEGRICHAIN_SHIP_TO","EMDS_VVA","APUS_VVA","BMS (NAV)","EXAS","POLARIS_DM","ANRO_DM","ASHVVA","MM_C1st","KFIS","DVA","Reltio","DDDV","IQVIA_DDD_ZIP","867","MYOV_VVA","COMPANY_ACCTS"]CDW user (PROD)cdwExternal OAuth2CDW-MDM_client["CREATE_HCO","UPDATE_HCO","GET_ENTITIES","USAGE_FLAG_UPDATE"]["US"]["SHS","SHS_MCO","IQVIA_MCO","CENTRIS","SAP","IQVIA_DDD","ONEKEY","DT_340b","DEA","HUB_CALLBACK","IQVIA_RAWDEA","IQVIA_PDRP","ENGAGE","GRV","ICUE","KOL_OneView","COV","ENGAGE 1.0","GRV","IQVIA_RX","MILLIMAN_MCO","ICUE","KOL_OneView","SHS_RX","MMIT","INTEGRICHAIN_TRADE_PARTNER","INTEGRICHAIN_SHIP_TO","EMDS_VVA","APUS_VVA","BMS (NAV)","EXAS","POLARIS_DM","ANRO_DM","ASHVVA","MM_C1st","KFIS","DVA","Reltio","DDDV","IQVIA_DDD_ZIP","867","MYOV_VVA","COMPANY_ACCTS"]FlowsFlowDescriptionSnowflake: Events publish flowEvents are published to snowflakeSnowflake: Base tables refreshTable is refreshed (every 2 hours in prod) with those eventsSnowflake MDMTable are read by an ETL process implemented by COMPANY Team Update Usage TagsUpdate BESTCALLEDON used flag on addressesCDW docs: Best Address Data flowClient software Snowpipe "
},
{
"title": "ETL - COMPANY (GBLUS)",
"pageID": "164470236",
"pageLink": "/pages/viewpage.action?pageId=164470236",
"content": "ContactsNayan, Rajeev <Rajeev.Nayan3@COMPANY.com>Duvvuri, Satya <Satya.Duvvuri@COMPANY.com>ACLsNameGateway User NameAuthenticationPing Federate UserRolesCountriesSourcesTopicBatchesETL batch load usermdmetl_nprodOAuth2SVC-MDMETL_client- "CREATE_HCP"- "CREATE_HCO"- "CREATE_MCO"- "CREATE_BATCH"- "GET_BATCH"- "MANAGE_STAGE"- "CLEAR_CACHE_BATCH"US- "SHS"- "SHS_MCO"- "IQVIA_MCO"- "CENTRIS"- "ENGAGE 1.0"- "GRV"- "IQVIA_DDD"- "SAP"- "ONEKEY"- "IQVIA_RAWDEA"- "IQVIA_PDRP"- "COV"- "IQVIA_RX"- "MILLIMAN_MCO"- "ICUE"- "KOL_OneView"- "SHS_RX"- "MMIT"- "INTEGRICHAIN"N/Abatches: "Symphony": - "HCPLoading" "Centris": - "HCPLoading" "IQVIA_DDD": - "HCOLoading" - "RelationLoading" "SAP": - "HCOLoading" "ONEKEY": - "HCPLoading" - "HCOLoading" - "RelationLoading" "IQVIA_RAWDEA": - "HCPLoading" "IQVIA_PDRP": - "HCPLoading" "PFZ_CUSTID_SYNC": - "COMPANYCustIDLoading" "OneView": - "HCOLoading" "HCPM": - "HCPLoading" "SHS_MCO": - "MCOLoading" - "RelationLoading" "IQVIA_MCO": - "MCOLoading" - "RelationLoading" "IQVIA_RX": - "HCPLoading" "MILLIMAN_MCO": - "MCOLoading" - "RelationLoading" "VEEVA": - "HCPLoading" - "HCOLoading" - "MCOLoading" - "RelationLoading" "SHS_RX": - "HCPLoading" "MMIT": - "MCOLoading" - "RelationLoading" "DDD_SAP": - "RelationLoading" "INTEGRICHAIN": - "HCOLoading"...ETL Get/Resubmit Errorsmdmetl_nprodOAuth2SVC-MDMETL_client- "GET_ERRORS"- "RESUBMIT_ERRORS"USALLN/AN/AFlowsBatch Controller: creating and updating batch instance - the user invokes the batch-service API to create a new batch instanceBulk Service: loading bulk data - the user invokes the batch-service API to load the dataAfter load, the processing starts - ETL BatchesClient software Informatica ETL data loaderSOPsAdding a New BatchCache Address ID Clear (Remove Duplicates) ProcessCache Address ID Update ProcessManager: Resubmitting Failed RecordsSOP in WikiManual Cache ClearUpdating ETL Dictionaries in ConsulUpdating Dictionary"
},
{
"title": "KOL_ONEVIEW (GBLUS)",
"pageID": "164469966",
"pageLink": "/pages/viewpage.action?pageId=164469966",
"content": "ContactsBrahma, Bagmita <Bagmita.Brahma2@COMPANY.com>Solanki, Hardik <Hardik.Solanki@COMPANY.com>Tikyani, Devesh <Devesh.Tikyani@COMPANY.com>DL DL-iMed_L3@COMPANY.comACLsNameGateway User NameAuthenticationPing Federate UserRolesCountriesSourcesTopicKOL_OneView userkol_oneviewOAuth2KOL-MDM-PFORCEOL_client- "CREATE_HCP"- "UPDATE_HCP"- "CREATE_HCO"- "UPDATE_HCO"- "GET_ENTITIES"- "LOOKUPS"USKOL_OneViewN/AKOL_OneView TOPICN/AKafka JassN/A"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'KOL_ONEVIEW')&& exchange.in.headers.eventType in ['full'] && ['KOL_OneView'].intersect(exchange.in.headers.eventSource) && exchange.in.headers.objectType in ['HCP', 'HCO']"USKOL_OneViewprod-out-full-koloneview-allFlowsCreate/Update HCP/HCO/MCOGet EntityCreate RelationsClient software Kafka Sink JDBC connector"
},
{
"title": "GRV (GBLUS)",
"pageID": "164469964",
"pageLink": "/pages/viewpage.action?pageId=164469964",
"content": "ContactsBablani, Vijay <Vijay.Bablani@COMPANY.com>Jain, Somya <Somya.Jain@COMPANY.com>Adhvaryu, Amish <Amish.Adhvaryu@COMPANY.com>Reynolds, Lori <Lori.Reynolds@COMPANY.com>Alphonso, Venisa <Venisa.Alphonso@COMPANY.com>Patel, Jay <Jay.Patel@COMPANY.com>Anumalasetty, Jayasravani <Jayasravani.Anumalasetty@COMPANY.com>ACLsNameGateway User NameAuthenticationPing Federate UserRolesCountriesSourcesTopicGRV UsergrvOAuth2GRV-MDM_client- "GET_ENTITIES"- "LOOKUPS"- "VALIDATE_HCP"- "CREATE_HCP"- "UPDATE_HCP"US- "GRV"N/AGRV-AIS-MDM Usergrv_aisOAuth2●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●- "GET_ENTITIES"- "LOOKUPS"- "VALIDATE_HCP"- "CREATE_HCP"- "UPDATE_HCP"- "CREATE_HCO"- "UPDATE_HCO"US- "GRV"- "CENTRIS"- "ENGAGE"N/AGRV TOPICN/AKafka JassN/A"(exchange.in.headers.reconciliationTarget==null)&& exchange.in.headers.eventType in ['full_not_trimmed'] && ['GRV'].intersect(exchange.in.headers.eventSource)&& exchange.in.headers.objectType in ['HCP'] && exchange.in.headers.eventSubtype in ['HCP_CHANGED']"USGRVprod-out-full-grv-allFlowsCreate/Update HCP/HCO/MCOGet EntityCreate RelationsClient software APIKafka connector"
},
{
"title": "GRACE (GBLUS)",
"pageID": "164469962",
"pageLink": "/pages/viewpage.action?pageId=164469962",
"content": "ContactsJeffrey.D.LoVetere@COMPANY.comwilliam.nerbonne@COMPANY.comKalyan.Kanumuru@COMPANY.comBrigilin.Stanley@COMPANY.comACLsNameGateway User NameAuthenticationPing Federate UserRolesCountriesSourcesTopicGRACE UsergraceOAuth2●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●- "GET_ENTITIES"- "LOOKUPS"US- "GRV"- "CENTRIS"- "ENGAGE"N/AFlowsGet EntityClient software API - read only"
},
{
"title": "KOL_ONEVIEW (EMEA, AMER, APAC)",
"pageID": "164470136",
"pageLink": "/pages/viewpage.action?pageId=164470136",
"content": "ContactsDL-SFA-INF_Support_PforceOL@COMPANY.comSolanki, Hardik (US - Mumbai) <hsolanki@COMPANY.com>Yagnamurthy, Maanasa (US - Hyderabad) <myagnamurthy@COMPANY.com>ACLsEMEANameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicKOL_ONEVIEW user (NPROD)kol_oneviewExternal OAuth2KOL-MDM-PFORCEOL_clientKOL-MDM_client["CREATE_HCP","UPDATE_HCP","CREATE_HCO","UPDATE_HCO","GET_ENTITIES","LOOKUPS"]["AD","AE","AO","AR","AU","BF","BH","BI","BJ","BL","BO","BR","BW","BZ","CA","CD","CF","CG","CH","CI","CL","CM","CN","CO","CP","CR","CV","DE","DJ","DK","DO","DZ","EC","EG","ES","ET","FI","FO","FR","GA","GB","GF","GH","GL","GM","GN","GP","GQ","GT","GW","HN","IE","IL","IN","IQ","IR","IT","JO","JP","KE","KW","LB","LR","LS","LY","MA","MC","MF","MG","ML","MQ","MR","MU","MW","MX","NA","NC","NG","NI","NZ","OM","PA","PE","PF","PL","PM","PT","PY","QA","RE","RU","RW","SA","SD","SE","SL","SM","SN","SV","SY","SZ","TD","TF","TG","TN","TR","TZ","UG","UY","VE","WF","YE","YT","ZA","ZM","ZW"]GB- "KOL_OneView"KOL_ONEVIEW user (PROD)kol_oneviewExternal OAuth2KOL-MDM-PFORCEOL_clientKOL-MDM_client["CREATE_HCP","UPDATE_HCP","CREATE_HCO","UPDATE_HCO","GET_ENTITIES","LOOKUPS"]["AD","AE","AO","AR","AU","BF","BH","BI","BJ","BL","BO","BR","BW","BZ","CA","CD","CF","CG","CH","CI","CL","CM","CN","CO","CP","CR","CV","DE","DJ","DK","DO","DZ","EC","EG","ES","ET","FO","FR","GA","GB","GF","GH","GL","GM","GN","GP","GQ","GT","GW","HN","IE","IL","IN","IQ","IR","IT","JO","JP","KE","KW","LB","LR","LS","LY","MA","MC","MF","MG","ML","MQ","MR","MU","MW","MX","NA","NC","NG","NI","NZ","OM","PA","PE","PF","PL","PM","PT","PY","QA","RE","RU","RW","SA","SD","SL","SM","SN","SV","SY","SZ","TD","TF","TG","TN","TR","TZ","UG","UY","VE","WF","YE","YT","ZA","ZM","ZW"]GB- "KOL_OneView"AMERNameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicKOL_ONEVIEW user (NPROD)kol_oneviewExternal 
OAuth2KOL-MDM-PFORCEOL_client["CREATE_HCP","UPDATE_HCP","CREATE_HCO","UPDATE_HCO","GET_ENTITIES","LOOKUPS"]["AR","BR","CA","MX","UY"]CA- "KOL_OneView"KOL_ONEVIEW user (PROD)kol_oneviewExternal OAuth2KOL-MDM-PFORCEOL_client["CREATE_HCP","UPDATE_HCP","CREATE_HCO","UPDATE_HCO","GET_ENTITIES","LOOKUPS"]["AR","BR","CA","MX","UY"]CA- "KOL_OneView"APACNameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicKOL_ONEVIEW user (NPROD)kol_oneviewExternal OAuth2KOL-MDM-PFORCEOL_client["CREATE_HCP","UPDATE_HCP","CREATE_HCO","UPDATE_HCO","GET_ENTITIES","LOOKUPS"]["AU","IN","KR","NZ","JP"]JP- "KOL_OneView"KOL_ONEVIEW user (PROD)kol_oneviewExternal OAuth2KOL-MDM-PFORCEOL_client["CREATE_HCP","UPDATE_HCP","CREATE_HCO","UPDATE_HCO","GET_ENTITIES","LOOKUPS"]["AU","IN","KR","NZ","JP"]JP- "KOL_OneView"KafkaEMEAEnvNameKafka UsernameConsumergroupPublisher routing ruleTopicPartitionsemea-prodKol_oneviewkol_oneview"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'KOL_ONEVIEW') && exchange.in.headers.eventType in ['full'] && ['KOL_OneView'].intersect(exchange.in.headers.eventSource) && exchange.in.headers.objectType in ['HCP', 'HCO'] && exchange.in.headers.country in ['ie', 'gb']"-${env}-out-full-koloneview-all3emea-devKol_oneviewkol_oneview-${env}-out-full-koloneview-all3emea-qaKol_oneviewkol_oneview-${env}-out-full-koloneview-all3emea-stageKol_oneviewkol_oneview-${env}-out-full-koloneview-all3AMEREnvNameKafka UsernameConsumergroupPublisher routing ruleTopicPartitionsgblus-prodKol_oneviewkol_oneview"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'KOL_OneView') && exchange.in.headers.eventType in ['full'] && ['KOL_OneView'].intersect(exchange.in.headers.eventSource) && exchange.in.headers.objectType in ['HCP', 
'HCO']"-${env}-out-full-koloneview-all3gblus-devKol_oneviewkol_oneview-${env}-out-full-koloneview-all3gblus-qaKol_oneviewkol_oneview-${env}-out-full-koloneview-all3gblus-stageKol_oneviewkol_oneview-${env}-out-full-koloneview-all3"
},
{
"title": "GRV (EMEA, AMER)",
"pageID": "164470150",
"pageLink": "/pages/viewpage.action?pageId=164470150",
"content": "ContactsTODOGatewayEMEANameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicGRV user (NPROD)grvExternal OAuth2GRV-MDM_client- GET_ENTITIES- LOOKUPS- VALIDATE_HCP["CA"]GBGRVN/AGRV user (PROD)grvExternal OAuth2GRV-MDM_client- GET_ENTITIES- LOOKUPS- VALIDATE_HCP["CA"]GBGRVN/AAMER(manager)NameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicGRV user (NPROD)grvExternal OAuth2GRV-MDM_client["GET_ENTITIES","LOOKUPS","VALIDATE_HCP","CREATE_HCP","UPDATE_HCP"]["US"]GRVN/AGRV user (PROD)grvExternal OAuth2GRV-MDM_client["GET_ENTITIES","LOOKUPS","VALIDATE_HCP","CREATE_HCP","UPDATE_HCP"]["US"]GRVN/AKafkaAMEREnvNameKafka UsernameConsumergroupPublisher routing ruleTopicPartitionsgblus-prodGrvgrv"(exchange.in.headers.reconciliationTarget==null) && exchange.in.headers.eventType in ['full_not_trimmed'] && ['GRV'].intersect(exchange.in.headers.eventSource) && exchange.in.headers.objectType in ['HCP'] && exchange.in.headers.eventSubtype in ['HCP_CHANGED']"- ${env}-out-full-grv-allgblus-devGrvgrv- ${local_env}-out-full-grv-allgblus-qaGrvgrv- ${local_env}-out-full-grv-allgblus-stageGrv grv- ${local_env}-out-full-grv-all"
},
{
"title": "GANT (Global, EMEA, AMER, APAC)",
"pageID": "164470148",
"pageLink": "/pages/viewpage.action?pageId=164470148",
"content": "ContactsNadpolla, Gangadhar (Gangadhar.Nadpolla@COMPANY.com)GatewayEMEANameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicGANT UsergantExternal OAuth2GANT-MDM_client- "GET_ENTITIES"- "LOOKUPS"["AD", "AG", "AI", "AM", "AN","AR", "AT", "AU", "AW", "BA","BB", "BE", "BG", "BL", "BM","BO", "BQ", "BR", "BS", "BY","BZ", "CA", "CH", "CL", "CN","CO", "CP", "CR", "CW", "CY","CZ", "DE", "DK", "DO", "DZ","EC", "EE", "EG", "ES", "FI","FO", "FR", "GB", "GF", "GP","GR", "GT", "GY", "HK", "HN","HR", "HU", "ID", "IE", "IL","IN", "IT", "JM", "JP", "KR","KY", "KZ", "LC", "LT", "LU","LV", "MA", "MC", "MF", "MQ","MU", "MX", "MY", "NC", "NI","NL", "NO", "NZ", "PA", "PE","PF", "PH", "PK", "PL", "PM","PN", "PT", "PY", "RE", "RO","RS", "RU", "SA", "SE", "SG","SI", "SK", "SV", "SX", "TF","TH", "TN", "TR", "TT", "TW","UA", "UY", "VE", "VG", "VN","WF", "XX", "YT", "ZA"]GBGRVN/AAMERAction RequiredUser configurationPingFederate UsernameGANT-MDM_clientCountriesBrazilTenantAMEREnvironments (PROD/NON-PROD/ALL)ALLAPI Servicesext-api-gw-amer-stage/entities,  ext-api-gw-amer-stage/lookups.SourcesONEKEY,CRMMI,MAPPBusiness JustificationAs we are fetching hcp data from MDM COMPANY Instance, Earlier It was MDM IQVIA instanceNameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicGANT UsergantExternal OAuth2GANT-MDM_client- "GET_ENTITIES"- "LOOKUPS"["BR"]BR- ONEKEY- CRMMI- MAPPN/AAPACAction RequiredUser configurationPingFederate UsernameGANT-MDM_clientCountriesIndiaTenantAPACEnvironments (PROD/NON-PROD/ALL)ALLAPI Servicesext-api-gw-apac-stage/entities,  ext-api-gw-apac-stage/lookups.SourcesONEKEY,CRMMI,MAPPBusiness JustificationAs we are fetching hcp data from MDM COMPANY Instance, Earlier It was MDM IQVIA instanceNameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicGANT UsergantExternal OAuth2GANT-MDM_client- "GET_ENTITIES"- "LOOKUPS"["IN"]IN- ONEKEY- CRMMI- MAPPN/A"
},
{
"title": "Medic (EMEA, AMER, APAC)",
"pageID": "164470140",
"pageLink": "/pages/viewpage.action?pageId=164470140",
"content": "ContactsDL-F&BO-MEDIC@COMPANY.comGatewayEMEANameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicMedic user (NPROD)medicExternal OAuth2MEDIC-MDM_client●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["AR","BR","CO","FR","GR","IE","IN","IT","NZ"]IE["MEDIC"]Medic user (PROD)medicExternal OAuth2MEDIC-MDM_client●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["AR","BR","CO","FR","GR","IE","IN","IT","NZ"]IE["MEDIC"]AMERNameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicMedic  user (NPROD)medicExternal OAuth2●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●, ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["AR","BR","CO","FR","GR","IE","IN","IT","NZ","US"]["867","ANRO_DM","APUS_VVA","ASHVVA","BMS (NAV)","CENTRIS","CICR","CN3RDPARTY","COV","CRMMI","DDDV","DEA","DT_340b","DVA","EMDS_VVA","ENGAGE 1.0","ENGAGE","EVR","EXAS","FACE","GCP","GRV","HUB_CALLBACK","HUB_Callback","ICUE","INTEGRICHAIN_SHIP_TO","INTEGRICHAIN_TRADE_PARTNER","IQVIA_DDD","IQVIA_DDD_ZIP","IQVIA_MCO","IQVIA_PDRP","IQVIA_RAWDEA","IQVIA_RX","JPDWH","KFIS","KOL_OneView","LocalMDM","MAPP","MDE","MEDIC","MILLIMAN_MCO","MMIT","MM_C1st","MYOV_VVA","NUCLEUS","OK","ONEKEY","COMPANY_ACCTS","PFORCERX","POLARIS_DM","PTRS","Reltio","ReltioCleanser","Rx_Audit","SAP","SHS","SHS_MCO","SHS_RX"]Medic user (PROD)medicExternal OAuth2●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["AR","BR","CO","FR","GR","IE","IN","IT","NZ","US"]["867","ANRO_DM","APUS_VVA","ASHVVA","BMS (NAV)","CENTRIS","CICR","CN3RDPARTY","COV","CRMMI","DDDV","DEA","DT_340b","DVA","EMDS_VVA","ENGAGE 
1.0","ENGAGE","EVR","EXAS","FACE","GCP","GRV","HUB_CALLBACK","HUB_Callback","ICUE","INTEGRICHAIN_SHIP_TO","INTEGRICHAIN_TRADE_PARTNER","IQVIA_DDD","IQVIA_DDD_ZIP","IQVIA_MCO","IQVIA_PDRP","IQVIA_RAWDEA","IQVIA_RX","JPDWH","KFIS","KOL_OneView","LocalMDM","MAPP","MDE","MEDIC","MILLIMAN_MCO","MMIT","MM_C1st","MYOV_VVA","NUCLEUS","OK","ONEKEY","COMPANY_ACCTS","PFORCERX","POLARIS_DM","PTRS","Reltio","ReltioCleanser","Rx_Audit","SAP","SHS","SHS_MCO","SHS_RX"]APACNameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicMedic user (NPROD)medicExternal OAuth2●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["AR","BR","CO","FR","GR","IE","IN","IT","NZ"]IN["MEDIC"]Medic user (PROD)medicExternal OAuth2MEDIC-MDM_client●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["AR","BR","CO","FR","GR","IE","IN","IT","NZ"]IN["MEDIC"]"
},
{
"title": "PTRS (EMEA, AMER, APAC)",
"pageID": "164470165",
"pageLink": "/pages/viewpage.action?pageId=164470165",
"content": "RequirementsEnvPublisher routing ruleTopicemea-prod(ptrs-eu)"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_RECONCILIATION') && exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['fr', 'gf', 'pf', 'gp', 'mq', 'yt', 'nc', 're', 'bl', 'mf', 'wf', 'pm', 'tf', 'br', 'mx', 'id', 'pt'] && exchange.in.headers.objectType in ['HCP', 'HCO']"01/Mar/23 4:14 AM[10:13 AM] Shanbhag, BhushanOkay in that case we want Turkey market's events to come from emea-prod-out-full-ptrs-global2 topic only. ${env}-out-full-ptrs-euemea prod and nprodsAdding MC and AD to out-full-ptrs-eu15/05/2023Sagar: Hi Karol,Can you please add below counties for France to country configuration list for FRANCE EMEA Topics (Prod, Stage QA & Dev)1. Monaco2. Andorra\n MR-6236\n -\n Getting issue details...\n STATUS\n ${env}-out-full-ptrs-euContactsAPI: Prapti.Nanda@COMPANY.com;Varun.ArunKumar@COMPANY.comKafka: Sagar.Bodala@COMPANY.comGatewayEMEANameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicPTRS user (NPROD)ptrsExternal OAuth2PTRS-MDM_client["CREATE_HCO","CREATE_HCP","GET_ENTITIES","LOOKUPS"]["AG","AI","AN","AR","AW","BB","BL","BM","BO","BR","BS","BZ","CL","CO","CR","CW","DO","EC","FR","GF","GP","GT","GY","HN","ID","IL","JM","KY","LC","MF","MQ","MU","MX","NC","NI","PA","PE","PF","PH","PM","PN","PT","PY","RE","SV","SX","TF","TR","TT","UY","VE","VG","WF","YT"]["PTRS"]PTRS user (PROD)ptrsExternal OAuth2PTRS-MDM_client["CREATE_HCO","CREATE_HCP","GET_ENTITIES","LOOKUPS"]["AG","AI","AN","AR","AW","BB","BL","BM","BO","BR","BS","BZ","CL","CO","CR","CW","DO","EC","FR","GF","GP","GT","GY","HN","ID","IL","JM","KY","LC","MF","MQ","MU","MX","NC","NI","PA","PE","PF","PH","PM","PN","PT","PY","RE","SV","SX","TF","TR","TT","UY","VE","VG","WF","YT"]["PTRS"]AMER(manager)NameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicPTRS user (NPROD)ptrsExternal 
OAuth2PTRS-MDM_client["CREATE_HCO","CREATE_HCP","GET_ENTITIES","LOOKUPS"]["MX","BR"]["PTRS"]PTRS user (PROD)ptrsExternal OAuth2PTRS-MDM_client["CREATE_HCO","CREATE_HCP","GET_ENTITIES","LOOKUPS"]["MX","BR"]["PTRS"]APAC(manager)NameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicPTRS user (NPROD)ptrsExternal OAuth2PTRS_RELTIO_ClientPTRS-MDM_client["CREATE_HCO","CREATE_HCP","GET_ENTITIES"]["ID","JP","PH"]["VOC","PTRS"]PTRS user (PROD)ptrsExternal OAuth2PTRS_RELTIO_ClientPTRS-MDM_client["CREATE_HCO","CREATE_HCP","GET_ENTITIES"]["JP"]["VOC","PTRS"]KafkaEMEAEnvNameKafka UsernameConsumergroupPublisher routing ruleTopicPartitionsemea-prod(ptrs-eu)Ptrsptrs"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_RECONCILIATION') && exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['fr', 'gf', 'pf', 'gp', 'mq', 'yt', 'nc', 're', 'bl', 'mf', 'wf', 'pm', 'tf', 'br', 'mx', 'id', 'pt', 'ad', 'mc'] && exchange.in.headers.objectType in ['HCP', 'HCO']"${env}-out-full-ptrs-eu3emea-prod (ptrs-global2)Ptrsptrs"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_GLOBAL2_REGENERATION') && exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['tr'] && exchange.in.headers.objectType in ['HCP', 'HCO']"${env}-out-full-ptrs-global23emea-dev (ptrs-global2)Ptrsptrs"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_GLOBAL2_REGENERATION') && exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['tr'] && exchange.in.headers.objectType in ['HCP', 'HCO']"${env}-out-full-ptrs-global23emea-qa (ptrs-eu)Ptrsptrsemea-dev-ptrs-eu"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_EU_REGENERATION') && exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['fr', 'gf', 'pf', 'gp', 
'mq', 'yt', 'nc', 're', 'bl', 'mf', 'wf', 'pm', 'tf'] && exchange.in.headers.objectType in ['HCP', 'HCO']"${env}-out-full-ptrs-eu3emea-qa (ptrs-global2)Ptrsptrs"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_GLOBAL2_REGENERATION') && exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['tr'] && exchange.in.headers.objectType in ['HCP', 'HCO']"${env}-out-full-ptrs-global23emea-stage (ptrs-eu)Ptrsptrsemea-stage-ptrs-eu"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_EU_REGENERATION') && exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['fr', 'gf', 'pf', 'gp', 'mq', 'yt', 'nc', 're', 'bl', 'mf', 'wf', 'pm', 'tf', 'pt', 'id', 'tr'] && exchange.in.headers.objectType in ['HCP', 'HCO']"${env}-out-full-ptrs-eu3emea-stage (ptrs-global2)Ptrsptrs"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_GLOBAL2_REGENERATION') && exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['tr'] && exchange.in.headers.objectType in ['HCP', 'HCO']"${env}-out-full-ptrs-global23AMEREnvNameKafka UsernameConsumergroupPublisher routing ruleTopicPartitionsamer-prod(ptrs-amer)Ptrsptrs"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_AMER_REGENERATION') && exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['mx', 'br'] && exchange.in.headers.objectType in ['HCP', 'HCO']"${env}-out-full-ptrs-amer3amer-dev (ptrs-amer)Ptrsptrsamer-dev-ptrs"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_AMER_REGENERATION') && exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['mx', 'br'] && exchange.in.headers.objectType in ['HCP', 'HCO']"${env}-out-full-ptrs-amer3amer-qa 
(ptrs-amer)Ptrsptrsamer-qa-ptrs"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_AMER_REGENERATION') && exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['mx', 'br'] && exchange.in.headers.objectType in ['HCP', 'HCO']"${env}-out-full-ptrs-amer3amer-stage (ptrs-amer)Ptrsptrsamer-stage-ptrs"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_AMER_REGENERATION') && exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['mx', 'br'] && exchange.in.headers.objectType in ['HCP', 'HCO']"${env}-out-full-ptrs-amer3APACEnvNameKafka UsernameConsumergroupPublisher routing ruleTopicPartitionsapac-dev (ptrs-apac)Ptrsptrs"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_APAC_REGENERATION') && exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['pk'] && exchange.in.headers.objectType in ['HCP', 'HCO']"${env}-out-full-ptrs-apacapac-qa (ptrs-apac)Ptrsptrs"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_APAC_REGENERATION') && exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['pk'] && exchange.in.headers.objectType in ['HCP', 'HCO']"${env}-out-full-ptrs-apacapac-stage (ptrs-apac)Ptrsptrs"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_APAC_REGENERATION') && exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['pk'] && exchange.in.headers.objectType in ['HCP', 'HCO']"${env}-out-full-ptrs-apacGBLEnvNameKafka UsernameConsumergroupPublisher routing ruleTopicPartitionsgbl-prodPtrsptrs"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_REGENERATION') && exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['co', 'mx', 'br', 'ph'] && 
exchange.in.headers.objectType in ['HCP', 'HCO']"- ${env}-out-full-ptrsgbl-prod (ptrs-eu)Ptrsptrs"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_EU_REGENERATION') && exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['fr', 'gf', 'pf', 'gp', 'mq', 'yt', 'nc', 're', 'bl', 'mf', 'wf', 'pm', 'tf'] && exchange.in.headers.objectType in ['HCP', 'HCO']"${env}-out-full-ptrs-eugbl-prod (ptrs-porind)Ptrsptrsexchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['id', 'pt'] && exchange.in.headers.objectType in ['HCP', 'HCO'] && !exchange.in.headers.eventSubtype.endsWith('_MATCHES_CHANGED') && (exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_PORIND_REGENERATION')"${env}-out-full-ptrs-porindgbl-devPtrsptrs"exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['co', 'mx', 'br', 'ph', 'cl', 'tr'] && exchange.in.headers.objectType in ['HCP', 'HCO'] && !exchange.in.headers.eventSubtype.endsWith('_MATCHES_CHANGED') && (exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_REGENERATION')"- ${env}-out-full-ptrs20gbl-dev (ptrs-eu)Ptrsptrsptrs_nprod"exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['fr', 'gf', 'pf', 'gp', 'mq', 'yt', 'nc', 're', 'bl', 'mf', 'wf', 'pm', 'tf'] && exchange.in.headers.objectType in ['HCP', 'HCO'] && !exchange.in.headers.eventSubtype.endsWith('_MATCHES_CHANGED') && (exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_EU_REGENERATION')"- ${env}-out-full-ptrs-eugbl-dev (ptrs-porind)Ptrsptrs"exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['id', 'pt'] && exchange.in.headers.objectType in ['HCP', 'HCO'] && !exchange.in.headers.eventSubtype.endsWith('_MATCHES_CHANGED') && (exchange.in.headers.reconciliationTarget==null || 
exchange.in.headers.reconciliationTarget == 'PTRS_PORIND_REGENERATION')"- ${env}-out-full-ptrs-porindgbl-qa (ptrs-eu)Ptrsptrs"exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['fr', 'gf', 'pf', 'gp', 'mq', 'yt', 'nc', 're', 'bl', 'mf', 'wf', 'pm', 'tf'] && exchange.in.headers.objectType in ['HCP', 'HCO'] && (exchange.in.headers.reconciliationTarget==null)"- ${env}-out-full-ptrs-eu20gbl-stagePtrsptrs"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_LATAM') && exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['co', 'mx', 'br', 'ph', 'cl','tr'] && exchange.in.headers.objectType in ['HCP', 'HCO']"- ${env}-out-full-ptrsgbl-stage (ptrs-eu)Ptrsptrsptrs_nprod"(exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_EU') && exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['fr', 'gf', 'pf', 'gp', 'mq', 'yt', 'nc', 're', 'bl', 'mf', 'wf', 'pm', 'tf'] && exchange.in.headers.objectType in ['HCP', 'HCO']"- ${env}-out-full-ptrs-eugbl-stage (ptrs-porind)Ptrsptrs"exchange.in.headers.eventType in ['full'] && exchange.in.headers.country in ['id', 'pt'] && exchange.in.headers.objectType in ['HCP', 'HCO'] && !exchange.in.headers.eventSubtype.endsWith('_MATCHES_CHANGED') && (exchange.in.headers.reconciliationTarget==null || exchange.in.headers.reconciliationTarget == 'PTRS_PORIND_REGENERATION')"- ${env}-out-full-ptrs-porind"
},
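The publisher routing rules quoted above are boolean expressions evaluated against event headers (reconciliationTarget, eventType, country, objectType). As an illustration only (not the HUB's actual expression engine), the emea-prod ptrs-eu rule can be restated as a Python predicate; the function name and the plain-dict header representation are assumptions made for this sketch:

```python
def ptrs_eu_rule(headers: dict) -> bool:
    """Illustrative restatement of the emea-prod ptrs-eu routing rule:
    route an event to ${env}-out-full-ptrs-eu when it is a 'full' event
    for an HCP/HCO in one of the configured countries, and it is either
    a live event (no reconciliationTarget) or a PTRS_RECONCILIATION replay."""
    countries = {'fr', 'gf', 'pf', 'gp', 'mq', 'yt', 'nc', 're', 'bl',
                 'mf', 'wf', 'pm', 'tf', 'br', 'mx', 'id', 'pt', 'ad', 'mc'}
    target = headers.get('reconciliationTarget')
    return ((target is None or target == 'PTRS_RECONCILIATION')
            and headers.get('eventType') == 'full'
            and headers.get('country') in countries
            and headers.get('objectType') in ('HCP', 'HCO'))
```

The same shape applies to the other rules in this page: only the reconciliationTarget value, the country set and the target topic change per routing entry.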
{
"title": "OneMed (EMEA)",
"pageID": "164470163",
"pageLink": "/pages/viewpage.action?pageId=164470163",
"content": "ContactsMarsha.Wirtel@COMPANY.com;AnveshVedula.Chalapati@COMPANY.comGatewayEMEANameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicOneMed user (NPROD)onemedExternal OAuth2ONEMED-MDM_client["GET_ENTITIES","LOOKUPS"]["AR","AU","BR","CH","CN","DE","ES","FR","GB","IE","IL","IN","IT","JP","MX","NZ","PL","SA","TR"]IE["CICR","CN3RDPARTY","CRMMI","EVR","FACE","GCP","GRV","KOL_OneView","LocalMDM","MAPP","MDE","OK","Reltio","Rx_Audit"]OneMeduser (PROD)onemedExternal OAuth2ONEMED-MDM_client["GET_ENTITIES","LOOKUPS"]["AR","AU","BR","CH","CN","DE","ES","FR","GB","IE","IL","IN","IT","JP","MX","NZ","PL","SA","TR"]IE["CICR","CN3RDPARTY","CRMMI","EVR","FACE","GCP","GRV","KOL_OneView","LocalMDM","MAPP","MDE","OK","Reltio","Rx_Audit"]"
},
{
"title": "GRACE (EMEA, AMER, APAC)",
"pageID": "164470161",
"pageLink": "/pages/viewpage.action?pageId=164470161",
"content": "ContactsDL-AIS-Mule-Integration-Support@COMPANY.comRequirementsPartial requirementsSent by Amish Adhvaryuaction neededNeed Plugin Configuration for below usernamesusernameGRACE MAVENS SFDC - DEV - ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●● - DevGRACE MAVENS SFDC - STG - ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●● - StageGRACE MAVENS SFDC - ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●● - ProdcountriesAU,NZ,IN,JP,KR (APAC) and AR, UY, MX (AMER)tenantAPAC and AMERenvironments (prod/nonprods/all)ALLAPI services exposedHCP HCO MCO Search, LookupsSourcesGraceBusiness justificationClient ID used by GRACE application to search HCP and HCOsGatewayEMEANameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicGRACE usergraceExternal OAuth2●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["AD","AG","AI","AM","AN","AR","AT","AU","AW","BA","BB","BE","BG","BL","BM","BO","BQ","BR","BS","BY","BZ","CA","CH","CL","CN","CO","CP","CR","CW","CY","CZ","DE","DK","DO","DZ","EC","EE","ES","FI","FO","FR","GB","GD","GF","GL","GP","GR","GT","GY","HK","HN","HR","HU","ID","IE","IL","IN","IT","JM","JP","KR","KY","KZ","LC","LT","LU","LV","MA","MC","MF","MQ","MU","MX","MY","NC","NI","NL","NO","NZ","PA","PE","PF","PH","PK","PL","PM","PN","PT","PY","RE","RO","RS","RU","SA","SE","SG","SI","SK","SR","SV","SX","TF","TH","TN","TR","TT","TW","UA","US","UY","VE","VG","VN","WF","XX","YT","ZA"]GB["NONE"]N/AGRACE UsergraceExternal 
OAuth2●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["AD","AG","AI","AM","AN","AR","AT","AU","AW","BA","BB","BE","BG","BL","BM","BO","BQ","BR","BS","BY","BZ","CA","CH","CL","CN","CO","CP","CR","CW","CY","CZ","DE","DK","DO","DZ","EC","EE","ES","FI","FO","FR","GB","GD","GF","GL","GP","GR","GT","GY","HK","HN","HR","HU","ID","IE","IL","IN","IT","JM","JP","KR","KY","KZ","LC","LT","LU","LV","MA","MC","MF","MQ","MU","MX","MY","NC","NI","NL","NO","NZ","PA","PE","PF","PH","PK","PL","PM","PN","PT","PY","RE","RO","RS","RU","SA","SE","SG","SI","SK","SR","SV","SX","TF","TH","TN","TR","TT","TW","UA","US","UY","VE","VG","VN","WF","XX","YT"]GB["NONE"]N/AAMERNameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicGRACE usergraceExternal OAuth2 (all)●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["CA","US","AR","UY","MX"]["NONE"]N/AExternal OAuth2 (amer-dev)●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●External OAuth2 (gblus-stage)●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●External OAuth2 (amer-stage)●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●GRACE UsergraceExternal OAuth2●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["AD","AR","AU","BR","CA","DE","ES","FR","GB","GF","GP","IN","IT","JP","KR","MC","MF","MQ","MX","NC","NZ","PF","PM","RE","SA","TR","US","UY"]["NONE"]N/AAPACNameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicGRACE usergraceExternal OAuth2 (all)●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["AR","AU","BR","CA","HK","ID","IN","JP","KR","MX","MY","NZ","PH","PK","SG","TH","TW","US","UY","VN"]["NONE"]N/AExternal OAuth2 (apac-stageb469b84094724d74adb9ff7224588647GRACE UsergraceExternal OAuth2●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["AD","AR","AU","BR","CA","DE","ES","FR","GB","GF","GP","IN","IT","JP","KR","MC","MF","MQ","MX","NC","NZ","PF","PM","RE","SA","TR","US","UY"]["NONE"]N/A"
},
{
"title": "Snowflake (Global, GBLUS)",
"pageID": "164469783",
"pageLink": "/pages/viewpage.action?pageId=164469783",
"content": "ContactsNarayanan, Abhilash <Abhilash.KadampanalNarayanan@COMPANY.com>ACLsNameGateway User NameAuthenticationPing Federate UserRolesCountriesSourcesTopicSnowflake topicSnowflake TopicKafka JAASN/Aexchange.in.headers.eventType in ['full_not_trimmed']exchange.in.headers.objectType in ['HCP', 'HCO', 'MCO', 'RELATIONSHIP']) ||(exchange.in.headers.eventType in ['simple'] && exchange.in.headers.objectType in ['ENTITY'])) ALLALLprod-out-full-snowflake-allFlowsSnowflake participate in two flows:Snowflake: Events publish flowEvent publisher pushes all events regarding entity/relation change to Kafka topic that is created for Snowflake( {{$env}}-out-full-snowflake-all }} ). Then Kafka Connect component pulls those events and loads them to Snowflake table(Flat model).ReconciliationMain goal of reconciliation process is to synchronise Snowflake database with MongoDB.Snowflake periodically exports entities and creates csv file with their identifiers and checksums. The file is sent to S3 from where it is then downloaded in the reconciliation process. This process compares the data in the file with the values stored in Mongo.A reconciliation event is created and posted on kafka topic in two cases:the cheksum has changedthere is lack of entity in csv fileClient software  Kafka Connect is responsible for collecting kafka events and loading them to Snowflake database in flat model.SOPsCurrently there are no SOPs for snowflake."
},
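The reconciliation comparison described above (emit an event when the checksum has changed, or when an entity stored in Mongo is absent from the csv export) can be sketched as follows. This is an illustrative Python sketch only, not the actual HUB implementation; the id/checksum record layout and the function name are assumptions based on the description:

```python
import csv
import io

def reconcile(snowflake_csv: str, mongo_checksums: dict) -> list:
    """Compare a Snowflake export (rows of 'entity_id,checksum') against
    the checksums stored in MongoDB; return the entity ids that need a
    reconciliation event published to the Kafka topic."""
    # Index the Snowflake export by entity id.
    exported = {}
    for row in csv.reader(io.StringIO(snowflake_csv)):
        entity_id, checksum = row[0], row[1]
        exported[entity_id] = checksum

    events = []
    for entity_id, mongo_sum in mongo_checksums.items():
        sf_sum = exported.get(entity_id)
        if sf_sum is None:
            # Case 2: the entity is missing from the csv file.
            events.append(entity_id)
        elif sf_sum != mongo_sum:
            # Case 1: the checksum has changed.
            events.append(entity_id)
    return events
```

Entities present in the export with matching checksums produce no event, so a run against a fully synchronised pair of stores yields an empty list.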
{
"title": "Vaccine (GBLUS)",
"pageID": "164469863",
"pageLink": "/pages/viewpage.action?pageId=164469863",
"content": "ContactsVajapeyajula, Venkata Kalyan Ram <Kalyan.Vajapeyajula@COMPANY.com>BAVISHI, MONICA <MONICA.BAVISHI@COMPANY.com>Duvvuri, Satya <Satya.Duvvuri@COMPANY.com>Garg, Nalini <Nalini.Garg@COMPANY.com>Shah, Himanshu <Himanshu.Shah@COMPANY.com>FlowsFlowDescriptionSnowflake: Events publish flowEvents AUTO_LINK_FOUND and POTENTIAL_LINK_FOUND are published to snowflakeSnowflake: Base tables refreshMATCHES table is refreshed (every 2 hours in prod) with those eventsSnowflake MDMMATCHES table are read by an ETL process implemented by COMPANY Team ETL BatchesThe ETL process creates relations like  SAPtoHCOSAffiliations. FlextoDDDAffiliations, FlextoHCOSAffiliations through the Batch ChannelNotMatch CallbackFor created relations, the NotMatch callback is triggered and removes LINKS using NotMatch Reltio callsClient software Addtional clients links/software/description ACLsNameGateway User NameAuthenticationPing Federate UserRolesCountriesSourcesTopicDerivedAffilations Batch Load userderivedaffiliations_loadN/AN/A- "CREATE_RELATION"- "UPDATE_RELATION"- US*"
},
{
"title": "ICUE (AMER)",
"pageID": "172301085",
"pageLink": "/pages/viewpage.action?pageId=172301085",
"content": "ContactsBrahma, Bagmita <Bagmita.Brahma2@COMPANY.com>Solanki, Hardik <Hardik.Solanki@COMPANY.com>Tikyani, Devesh <Devesh.Tikyani@COMPANY.com>GatewayAMERNameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicICUE user (NPROD)icueExternal OAuth2ICUE-MDM_client["CREATE_HCP","UPDATE_HCP","CREATE_HCO","UPDATE_HCO","CREATE_MCO","UPDATE_MCO","GET_ENTITIES","LOOKUPS"]["US"]["ICUE"]consumer: regex: - "^.*-out-full-icue-all$" - "^.*-out-full-icue-grv-all$"groups: - icue_dev - icue_qa - icue_stage - dev_icue_grv - qa_icue_grv - stage_icue_grvICUE user (PROD)icueExternal OAuth2ICUE-MDM_client["CREATE_HCP","UPDATE_HCP","CREATE_HCO","UPDATE_HCO","CREATE_MCO","UPDATE_MCO","GET_ENTITIES","LOOKUPS"]["US"]["ICUE"]consumer: regex: - "^.*-out-full-icue-all$" - "^.*-out-full-icue-grv-all$"groups: - icue_prod - prod_icue_grvKafkaGBLUS (icue-grv-mule)NameKafka UsernameConsumergroupPublisher routing ruleTopicPartitionsicue - DEVicue_nprod"exchange.in.headers.eventType in ['full_not_trimmed'] && exchange.in.headers.objectType in ['HCP'] && ['GRV'].intersect(exchange.in.headers.eventSource) && !(['ICUE'].intersect(exchange.in.headers.eventSource)) && exchange.in.headers.eventSubtype in ['HCP_CREATED', 'HCP_CHANGED']"${local_env}-out-full-icue-grv-all"icue - QAicue_nprod${local_env}-out-full-icue-grv-allicue - STAGEicue_nprod${local_env}-out-full-icue-grv-allicue  - PRODicuex_prod${env}-out-full-icue-grv-allFlowsCreate/Update HCP/HCO/MCOGet EntityCreate RelationsClient software APIKafka connector"
},
{
"title": "ESAMPLES (GBLUS)",
"pageID": "172301089",
"pageLink": "/pages/viewpage.action?pageId=172301089",
"content": "ContactsAdhvaryu, Amish <Amish.Adhvaryu@COMPANY.com>Jain, Somya <Somya.Jain@COMPANY.com>Bablani, Vijay <Vijay.Bablani@COMPANY.com>Reynolds, Lori <Lori.Reynolds@COMPANY.com>ACLsNameGateway User NameAuthenticationPing Federate UserRolesCountriesSourcesTopicMuleSoft - esamples useresamplesOAuth2●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●- "GET_ENTITIES"USall_sourcesN/AFlowsGet EntityClient software API - read only"
},
{
"title": "VEEVA_FIELD (EMEA, AMER)",
"pageID": "172301091",
"pageLink": "/pages/viewpage.action?pageId=172301091",
"content": "ContactsAdhvaryu, Amish <Amish.Adhvaryu@COMPANY.com>Fani, Chris <Christopher.Fani@COMPANY.com>GatewayEMEANameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicVEEVA_FIELD user (NPROD)veeva_fieldExternal OAuth2●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["AD","AG","AI","AM","AN","AR","AT","AU","AW","BA","BB","BE","BG","BL","BM","BO","BQ","BR","BS","BY","BZ","CA","CH","CL","CN","CO","CP","CR","CW","CY","CZ","DE","DK","DO","DZ","EC","EE","ES","FI","FO","FR","GB","GF","GL","GP","GR","GT","GY","HK","HN","HR","HU","ID","IE","IL","IN","IT","JM","JP","KR","KY","KZ","LC","LT","LU","LV","MA","MC","MF","MQ","MU","MX","MY","NC","NI","NL","NO","NZ","PA","PE","PF","PH","PK","PL","PM","PN","PT","PY","RE","RO","RS","RU","SA","SE","SG","SI","SK","SV","SX","TF","TH","TN","TR","TT","TW","UA","UY","VE","VG","VN","WF","XX","YT"]GB["AHA","AMA","AMPCO","AMS","AOA","BIODOSE","BUPA","CH","CICR","CN3RDPARTY","CRMMI","CRMMI-SUR","CSL","DDD","DEA","DT_340b","ENGAGE","EVR","FACE","GCP","GRV","HCH","HCOS","HMS","HUB_CALLBACK","HUB_Callback","HUB_USAGETAG","IMSDDD","IMSPLAN","JPDWH","KOL_OneView","KOL_OneView","LLOYDS","LocalMDM","MAPP","MDE","MEDIC","NHS","NUCLEUS","OK","ONEKEY","PCMS","PFORCERX","PFORCERX_ID","PFORCERX_ODS","PTRS","RX_AUDIT","Reltio","ReltioCleanser","Rx_Audit","SAP","SYMP","VEEVA","VEEVA_AU","VEEVA_NZ","VEEVA_PHARMACY_AU","XPO"]N/AVEEVA_FIELD user (PROD)veeva_fieldExternal 
OAuth2●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["AD","AG","AI","AM","AN","AR","AT","AU","AW","BA","BB","BE","BG","BL","BM","BO","BQ","BR","BS","BY","BZ","CA","CH","CL","CN","CO","CP","CR","CW","CY","CZ","DE","DK","DO","DZ","EC","EE","ES","FI","FO","FR","GB","GF","GL","GP","GR","GT","GY","HK","HN","HR","HU","ID","IE","IL","IN","IT","JM","JP","KR","KY","KZ","LC","LT","LU","LV","MA","MC","MF","MQ","MU","MX","MY","NC","NI","NL","NO","NZ","PA","PE","PF","PH","PK","PL","PM","PN","PT","PY","RE","RO","RS","RU","SA","SE","SG","SI","SK","SV","SX","TF","TH","TN","TR","TT","TW","UA","UY","VE","VG","VN","WF","XX","YT"]GB["AHA","AMA","AMPCO","AMS","AOA","BIODOSE","BUPA","CH","CICR","CN3RDPARTY","CRMMI","CRMMI-SUR","CSL","DDD","DEA","DT_340b","ENGAGE","EVR","FACE","GCP","GRV","HCH","HCOS","HMS","HUB_CALLBACK","HUB_Callback","HUB_USAGETAG","IMSDDD","IMSPLAN","JPDWH","KOL_OneView","LLOYDS","LocalMDM","MAPP","MDE","MEDIC","NHS","NUCLEUS","OK","ONEKEY","PCMS","PFORCERX","PFORCERX_ID","PFORCERX_ODS","PTRS","RX_AUDIT","Reltio","ReltioCleanser","Rx_Audit","SAP","SYMP","VEEVA","VEEVA_AU","VEEVA_NZ","VEEVA_PHARMACY_AU","XPO"]N/AAMERNameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicVEEVA_FIELD   user (NPROD)veeva_fieldExternal OAuth2●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["CA", "US"]["867","ANRO_DM","APUS_VVA","ASHVVA","BMS (NAV)","CENTRIS","CICR","CN3RDPARTY","COV","CRMMI","DDDV","DEA","DT_340b","DVA","EMDS_VVA","ENGAGE 1.0","ENGAGE","EVR","EXAS","FACE","GCP","GRV","HUB_CALLBACK","HUB_Callback","ICUE","INTEGRICHAIN_SHIP_TO","INTEGRICHAIN_TRADE_PARTNER","IQVIA_DDD","IQVIA_DDD_ZIP","IQVIA_MCO","IQVIA_PDRP","IQVIA_RAWDEA","IQVIA_RX","JPDWH","KFIS","KOL_OneView","LocalMDM","MAPP","MDE","MEDIC","MILLIMAN_MCO","MMIT","MM_C1st","MYOV_VVA","NUCLEUS","OK","ONEKEY","COMPANY_ACCTS","PFORCERX","POLARIS_DM","PTRS","Reltio","ReltioCleanser","Rx_Audit","SAP","SHS","SHS_MCO","SHS_RX"]N/AExternal 
OAuth2(GBLUS-STAGE)55062bae02364c7598bc3ffbfe38e07bVEEVA_FIELD user (PROD)veeva_fieldExternal OAuth2 (ALL)67b77aa7ecf045539237af0dec890e59726b6d341f994412a998a3e32fdec17a["GET_ENTITIES","LOOKUPS"]["CA", "US"]["867","ANRO_DM","APUS_VVA","ASHVVA","BMS (NAV)","CENTRIS","CICR","CN3RDPARTY","COV","CRMMI","DDDV","DEA","DT_340b","DVA","EMDS_VVA","ENGAGE 1.0","ENGAGE","EVR","EXAS","FACE","GCP","GRV","HUB_CALLBACK","HUB_Callback","ICUE","INTEGRICHAIN_SHIP_TO","INTEGRICHAIN_TRADE_PARTNER","IQVIA_DDD","IQVIA_DDD_ZIP","IQVIA_MCO","IQVIA_PDRP","IQVIA_RAWDEA","IQVIA_RX","JPDWH","KFIS","KOL_OneView","LocalMDM","MAPP","MDE","MEDIC","MILLIMAN_MCO","MMIT","MM_C1st","MYOV_VVA","NUCLEUS","OK","ONEKEY","COMPANY_ACCTS","PFORCERX","POLARIS_DM","PTRS","Reltio","ReltioCleanser","Rx_Audit","SAP","SHS","SHS_MCO","SHS_RX"]N/AFlowsGet EntityClient software API - read only"
},
{
"title": "PFORCEOL (EMEA, AMER, APAC)",
"pageID": "172301093",
"pageLink": "/pages/viewpage.action?pageId=172301093",
"content": "ContactsAdhvaryu, Amish <Amish.Adhvaryu@COMPANY.com>Fani, Chris <Christopher.Fani@COMPANY.com>RequirementsPartial requirementsSent by Amish AdhvaryuPforceOL Dev - ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●PforceOL Stage - ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●PforceOL Prod - ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●● PT RO DK BR IL TR GR NO CA JP MX AT AR RU KR DE PL AU HK IN MY PH SG TW TH ES CZ LT UA VN ID KZ HU SK UK SE FI CH SA EG MA ZA BE NL IT DZ CO NZ PE CL EE HR LV RS TN US CN SI FR BG IR WA PKNew Requirements - October 2024Action neededNeed Access to PFORCEOL - DEV, PFORCEOL - QA, PFORCEOL - STG, PFORCEOL - PRODPingFederate usernameDEV & QA: ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●STG: ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●PROD: ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●CountriesAC, AE, AG, AI, AR, AT, AU, AW, BB, BE, BH, BM, BR, BS, BZ, CA, CH, CN, CO, CR, CU, CW, CY, CZ, DE, DK, DM, DO, DZ, EG, ES, FI, FK, FR, GB, GD, GF, GP, GR, GT, GY, HK, HN, HT, ID, IE, IL, IN, IT, JM, JP, KN, KR, KW, KY, LC, LU, MF, MQ, MS, MX, MY, NI, NL, NO, NZ, OM, PA, PH, PL, PT, QA, RO, SA, SE, SG, SK, SR, SV, SX, TC, TH, TR, TT, TW, UE, UK, US, VC, VG, VN, YE, ZAAJ: "Keep the other countries for now"Full list:AC, AD, AE, AG, AI, AM, AN, AR, AT, AU, AW, BA, BB, BE, BG, BH, BL, BM, BO, BQ, BR, BS, BY, BZ, CA, CH, CL, CN, CO, CP, CR, CU, CW, CY, CZ, DE, DK, DM, DO, DZ, EC, EE, EG, ES, FI, FK, FO, FR, GB, GD, GF, GL, GP, GR, GT, GY, HK, HN, HR, HT, HU, ID, IE, IL, IN, IR, IT, JM, JP, KN, KR, KW, KY, KZ, LC, LT, LU, LV, MA, MC, MF, MQ, MS, MU, MX, MY, NC, NI, NL, NO, NZ, OM, PA, PE, PF, PH, PK, PL, PM, PN, PT, PY, QA, RE, RO, RS, RU, SA, SE, SG, SI, SK, SR, SV, SX, TC, TF, TH, TN, TR, TT, TW, UA, UE, UK, US, UY, VC, VE, VG, VN, WA, WF, XX, YE, YT, ZATenantAMER, EMEA, APAC, US, EX-USEnvironmentsDEV, QA, STG, PRODPermissions rangeRead access for HCP Search and HCO Search and MCO SearchSourcesSources that are configured in OneMed:MAPP, ONEKEY,OK, PFORCERX_ODS, PFORCERX, VOD, LEGACY_SFA_IDL, PTRS, JPDWH, iCUE, IQVIA_DDD, 
DCR_SYNC, MDE, MEDPAGESHCP, MEDPAGESHCOBusiness justificationThese changes are required as part of OneMed 2.0 Transformation Project. This project is responsible to ensure an improvised system due to which the proposed changes will help the OneMed technical team to build a better solution to search for HCP/HCO data within MDM system through API integration.Point of contactAnvesh (anveshvedula.chalapati@COMPANY.com), Aparna (aparna.balakrishna@COMPANY.com)Excel sheet with countries: GatewayEMEANameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicPFORCEOL user (NPROD)pforceolExternal OAuth2●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["NO","AD","AG","AI","AM","AN","AR","AT","AU","AW","BA","BB","BE","BG","BL","BM","BO","BQ","BR","BS","BY","BZ","CA","CH","CL","CN","CO","CP","CR","CW","CY","CZ","DE","DK","DO","DZ","EC","EE","EG","ES","FI","FO","FR","GB","GF","GL","GP","GR","GT","GY","HK","HN","HR","HU","ID","IE","IL","IN","IR","IT","JM","JP","KR","KY","KZ","LC","LT","LU","LV","MA","MC","MF","MQ","MU","MX","MY","NC","NI","NL","false","NZ","PA","PE","PF","PH","PK","PL","PM","PN","PT","PY","RE","RO","RS","RU","SA","SE","SG","SI","SK","SV","SX","TF","TH","TN","TR","TT","TW","UA","UK","US","UY","VE","VG","VN","WA","WF","XX","YT","ZA"]GB["AHA","AMA","AMPCO","AMS","AOA","BIODOSE","BUPA","CH","CICR","CN3RDPARTY","CRMMI","CRMMI-SUR","CSL","DDD","DEA","DT_340b","ENGAGE","EVR","FACE","GCP","GRV","HCH","HCOS","HMS","HUB_CALLBACK","HUB_Callback","HUB_USAGETAG","IMSDDD","IMSPLAN","JPDWH","KOL_OneView","KOL_OneView","LLOYDS","LocalMDM","MAPP","MDE","MEDIC","NHS","NUCLEUS","OK","ONEKEY","PCMS","PFORCERX","PFORCERX_ID","PFORCERX_ODS","PTRS","RX_AUDIT","Reltio","ReltioCleanser","Rx_Audit","SAP","SYMP","VEEVA","VEEVA_AU","VEEVA_NZ","VEEVA_PHARMACY_AU","XPO"]N/APFORCEOL user (PROD)pforceolExternal OAuth2- 
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["NO","AD","AG","AI","AM","AN","AR","AT","AU","AW","BA","BB","BE","BG","BL","BM","BO","BQ","BR","BS","BY","BZ","CA","CH","CL","CN","CO","CP","CR","CW","CY","CZ","DE","DK","DO","DZ","EC","EE","EG","ES","FI","FO","FR","GB","GF","GL","GP","GR","GT","GY","HK","HN","HR","HU","ID","IE","IL","IN","IR","IT","JM","JP","KR","KY","KZ","LC","LT","LU","LV","MA","MC","MF","MQ","MU","MX","MY","NC","NI","NL","false","NZ","PA","PE","PF","PH","PK","PL","PM","PN","PT","PY","RE","RO","RS","RU","SA","SE","SG","SI","SK","SV","SX","TF","TH","TN","TR","TT","TW","UA","UK","UY","VE","VG","VN","WA","WF","XX","YT","ZA"]GB["AHA","AMA","AMPCO","AMS","AOA","BIODOSE","BUPA","CH","CICR","CN3RDPARTY","CRMMI","CRMMI-SUR","CSL","DDD","DEA","DT_340b","ENGAGE","EVR","FACE","GCP","GRV","HCH","HCOS","HMS","HUB_CALLBACK","HUB_Callback","HUB_USAGETAG","IMSDDD","IMSPLAN","JPDWH","KOL_OneView","LLOYDS","LocalMDM","MAPP","MDE","MEDIC","NHS","NUCLEUS","OK","ONEKEY","PCMS","PFORCERX","PFORCERX_ID","PFORCERX_ODS","PTRS","RX_AUDIT","Reltio","ReltioCleanser","Rx_Audit","SAP","SYMP","VEEVA","VEEVA_AU","VEEVA_NZ","VEEVA_PHARMACY_AU","XPO"]N/AAMERNameGateway User NameAuthenticationPing Federate UserRolesCountriesDefaultCountrySourcesTopicPFORCEOL  user (NPROD)pforceolExternal OAuth2●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●["GET_ENTITIES","LOOKUPS"]["CA", "US"]["867","ANRO_DM","APUS_VVA","ASHVVA","BMS (NAV)","CENTRIS","CICR","CN3RDPARTY","COV","CRMMI","DDDV","DEA","DT_340b","DVA","EMDS_VVA","ENGAGE 1.0","ENGAGE","EVR","EXAS","FACE","GCP","GRV","HUB_CALLBACK","HUB_Callback","ICUE","INTEGRICHAIN_SHIP_TO","INTEGRICHAIN_TRADE_PARTNER","IQVIA_DDD","IQVIA_DDD_ZIP","IQVIA_MCO","IQVIA_PDRP","IQVIA_RAWDEA","IQVIA_RX","JPDWH","KFIS","KOL_OneView","LocalMDM","MAPP","MDE","MEDIC","MILLIMAN_MCO","MMIT","MM_C1st","MYOV_VVA","NUCLEUS","OK","ONEKEY","COMPANY_ACCTS","PFORCERX","POLARIS_DM","PTRS","Reltio","ReltioCleanser","Rx_Audit","SAP","SHS","SHS_MCO","SHS_RX"]N/AExternal 
OAuth2(GBLUS-STAGE)223ca6b37aef4168afaa35aa2cf39a3ePFORCEOL user (PROD)pforceolExternal OAuth2 (ALL)e678c66c02c64b599b351e0ab02bae9fe6ece8da20284c6987ce3b8564fe9087["GET_ENTITIES","LOOKUPS"]["CA", "US"]["867","ANRO_DM","APUS_VVA","ASHVVA","BMS (NAV)","CENTRIS","CICR","CN3RDPARTY","COV","CRMMI","DDDV","DEA","DT_340b","DVA","EMDS_VVA","ENGAGE 1.0","ENGAGE","EVR","EXAS","FACE","GCP","GRV","HUB_CALLBACK","HUB_Callback","ICUE","INTEGRICHAIN_SHIP_TO","INTEGRICHAIN_TRADE_PARTNER","IQVIA_DDD","IQVIA_DDD_ZIP","IQVIA_MCO","IQVIA_PDRP","IQVIA_RAWDEA","IQVIA_RX","JPDWH","KFIS","KOL_OneView","LocalMDM","MAPP","MDE","MEDIC","MILLIMAN_MCO","MMIT","MM_C1st","MYOV_VVA","NUCLEUS","OK","ONEKEY","COMPANY_ACCTS","PFORCERX","POLARIS_DM","PTRS","Reltio","ReltioCleanser","Rx_Audit","SAP","SHS","SHS_MCO","SHS_RX"]N/AFlowsGet EntityClient software API - read only"
},
{
"title": "1CKOL (Global)",
"pageID": "184688633",
"pageLink": "/pages/viewpage.action?pageId=184688633",
"content": "Contacts:Kucherov, Aleksei <Aleksei.Kucherov@COMPANY.com>; Moshin, Nikolay <Nikolay.Moshin@COMPANY.com>Old Contacts:Data load support:First Name: IlyaLast Name: EnkovichOffice:  ●●●●●●●●●●●●●●●●●●Mob: ●●●●●●●●●●●●●●●●●●Internet: www.unit-systems.ruE-mail: enkovich.i.s@unit-systems.ruBackup contact:First Name: SergeyLast Name: PortnovOffice: ●●●●●●●●●●●●●●●●●●Mob: ●●●●●●●●●●●●●●●●●●Internet: www.unit-systems.ruE-mail: portnov.s.a@unit-systems.ruFlows1CKOL has one batch process which consumes export files from data warehouse, process this, and loads data to MDM. This process is base on incremental batch engine and run on Airflow platform.Input filesThe input files are delivered by 1CKOL to AWS S3 bucketMAPP Review - Europe - 1cKOL - All Documents (sharepoint.com)UATPRODS3 service accountsvc_gbicc_euw1_project_mdm_inbound_1ckol_rw_s3svc_gbicc_euw1_project_mdm_inbound_1ckol_rw_s3S3 Access key IDAKIATCTZXPPJXRNSDOGNAKIATCTZXPPJXRNSDOGNS3 Bucketpfe-baiaes-eu-w1-nprod-projectpfe-baiaes-eu-w1-projectS3 Foldermdm/UAT/inbound/KOL/RU/mdm/inbound/KOL/RU/Input data file mask KOL_Extract_Russia_[0-9]+.zipKOL_Extract_Russia_[0-9]+.zipCompressionzipzipFormatFlat files, 1CKOL dedicated format Flat files, 1CKOL dedicated format ExampleKOL_Extract_Russia_07212021.zipKOL_Extract_Russia_07212021.zipSchedulenonenoneAirflow job inc_batch_eu_kol_ru_stage inc_batch_eu_kol_ru_prod Data mapping Data mapping is described in the attached document.ConfigurationFlow configuration is stored in MDM Environment configuration repository. For each environment where the flow should be enabled the configuration file inc_batch_eu_kol_ru.yml has to be created in the location related to configured environment: inventory/[env name]/group_vars/gw-airflow-services/ and the batch name "inc_batch_eu_kol_ru" has to be added to "airflow_components" list which is defined in file inventory/[env name]/group_vars/gw-airflow-services/all.yml. 
The table below presents the location of the inc_batch_eu_kol_ru.yml file for the UAT and PROD environments:inc_batch_eu_kol_ruUAThttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/stage/group_vars/gw-airflow-services/inc_batch_eu_kol_ru.ymlPRODhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/prod/group_vars/gw-airflow-services/inc_batch_eu_kol_ru.ymlApplying configuration changes is done by executing the Airflow components deployment procedure.SOPsThere is no particular SOP procedure for this flow. All common SOPs are described in the "Incremental batch flows: SOP" chapter."
},
{
"title": "Snowflake MDM Data Mart",
"pageID": "164470197",
"pageLink": "/display/GMDM/Snowflake+MDM+Data+Mart",
"content": "The section describes   MDM Data Mart in Snowflake. The Data Mart contains MDM data from Reltio tenants published into Snowflake via MDM HUB.Roles, permissions, warehouses used in MDM Data Mart in Snowflake: NewMdmSfRoles_231017.xlsx"
},
{
"title": "Connect Guide",
"pageID": "196886695",
"pageLink": "/display/GMDM/Connect+Guide",
"content": "How to add a user to the DATA Role: Users accessing Snowflake have to create a ticket and add themselves to the DATA role. This will allow the user to view the CUSTOMER_SL schema (the user access layer to Snowflake):
1. Go to https://requestmanager.COMPANY.com/
2. Click on the top: "Group Manager" - https://requestmanager1.COMPANY.com/Group/Default.aspx
3. Click on the "Distribution Lists"
4. Search for the group you want to be added to. Check the group name here: "List Of Groups With Access To The DataMart". In the search, write the "AD Group Name" for the selected SF Instance.
5. Click Request Access
6. Click "Add Myself" and then save
7. Go to "Cart" and click "Submit Request"
How to connect to the DB:
1. Go to the Environments view.
2. Choose the Environment that you want to view, e.g. EMEA - EMEA
3. Choose the NPROD or PROD environments, e.g. EMEA STAGE Services
4. On this page go to the Snowflake MDM DataMart
5. Click on the DB Url, e.g. https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com
6. The following page will open: Click "Sign in using COMPANY SSO"
7. Open "New Worksheet"
8. Choose:
ROLE:
WAREHOUSE: COMM_MDM_DMART_WH - this is based on the "Snowflake MDM DataMart" table - Default warehouse name
DATABASE: COMM_<MARKET>_MDM_DMART_<ENV>_DB - this is based on the "Snowflake MDM DataMart" table - DB Name
SCHEMA: CUSTOMER_SL
List Of Groups With Access To The DataMart
Since October 2023: NewMdmSfRoles_231017 1.xlsx
[Expired Oct 2023] Groups that have access to the CUSTOMER_SL schema:
Role Name | SF Instance | DB Instance | Env | AD Group Name
COMM_AMER_MDM_DMART_DEV_DATA_ROLE | AMER | AMER | DEV | sfdb_us-east-1_amerdev01_COMM_AMER_MDM_DMART_DEV_DATA_ROLE
COMM_AMER_MDM_DMART_QA_DATA_ROLE | AMER | AMER | QA | sfdb_us-east-1_amerdev01_COMM_AMER_MDM_DMART_QA_DATA_ROLE
COMM_AMER_MDM_DMART_STG_DATA_ROLE | AMER | AMER | STAGE | sfdb_us-east-1_amerdev01_COMM_AMER_MDM_DMART_STG_DATA_ROLE
COMM_AMER_MDM_DMART_PROD_DATA_ROLE | AMER | AMER | PROD | sfdb_us-east-1_amerprod01_COMM_AMER_MDM_DMART_PROD_DATA_ROLE
COMM_MDM_DMART_DEV_DATA_ROLE | AMER | US | DEV | sfdb_us-east-1_amerdev01_COMM_DEV_MDM_DMART_DATA_ROLE
COMM_MDM_DMART_QA_DATA_ROLE | AMER | US | QA | sfdb_us-east-1_amerdev01_COMM_QA_MDM_DMART_DATA_ROLE
COMM_MDM_DMART_STG_DATA_ROLE | AMER | US | STAGE | sfdb_us-east-1_amerdev01_COMM_STG_MDM_DMART_DATA_ROLE
COMM_MDM_DMART_PROD_DATA_ROLE | AMER | US | PROD | sfdb_us-east-1_amerprod01_COMM_PROD_MDM_DMART_DATA_ROLE
COMM_APAC_MDM_DMART_DEV_DATA_ROLE | EMEA | APAC | DEV | sfdb_eu-west-1_emeadev01_COMM_APAC_MDM_DMART_DEV_DATA_ROLE
COMM_APAC_MDM_DMART_QA_DATA_ROLE | EMEA | APAC | QA | sfdb_eu-west-1_emeadev01_COMM_APAC_MDM_DMART_QA_DATA_ROLE
COMM_APAC_MDM_DMART_STG_DATA_ROLE | EMEA | APAC | STAGE | sfdb_eu-west-1_emeadev01_COMM_APAC_MDM_DMART_STG_DATA_ROLE
COMM_APAC_MDM_DMART_PROD_DATA_ROLE | EMEA | APAC | PROD | sfdb_eu-west-1_emeaprod01_COMM_APAC_MDM_DMART_PROD_DATA_ROLE
COMM_EMEA_MDM_DMART_DEV_DATA_ROLE | EMEA | EMEA | DEV | sfdb_eu-west-1_emeadev01_COMM_EMEA_MDM_DMART_DEV_DATA_ROLE
COMM_EMEA_MDM_DMART_QA_DATA_ROLE | EMEA | EMEA | QA | sfdb_eu-west-1_emeadev01_COMM_EMEA_MDM_DMART_QA_DATA_ROLE
COMM_EMEA_MDM_DMART_STG_DATA_ROLE | EMEA | EMEA | STAGE | sfdb_eu-west-1_emeadev01_COMM_EMEA_MDM_DMART_STG_DATA_ROLE
COMM_EMEA_MDM_DMART_PROD_DATA_ROLE | EMEA | EMEA | PROD | sfdb_eu-west-1_emeaprod01_COMM_EMEA_MDM_DMART_PROD_DATA_ROLE
COMM_MDM_DMART_DEV_DATA_ROLE | EMEA | EU | DEV | sfdb_eu-west-1_emeadev01_COMM_DEV_MDM_DMART_DATA_ROLE
COMM_MDM_DMART_QA_DATA_ROLE | EMEA | EU | QA | sfdb_eu-west-1_emeadev01_COMM_QA_MDM_DMART_DATA_ROLE
COMM_MDM_DMART_STG_DATA_ROLE | EMEA | EU | STAGE | sfdb_eu-west-1_emeadev01_COMM_STG_MDM_DMART_DATA_ROLE
COMM_MDM_DMART_PROD_DATA_ROLE | EMEA | EU | PROD | sfdb_eu-west-1_emeaprod01_COMM_PROD_MDM_DMART_DATA_ROLE
COMM_GBL_MDM_DMART_DEV_DATA_ROLE | EMEA | GBL | DEV | sfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_DEV_DATA_ROLE
COMM_GBL_MDM_DMART_QA_DATA_ROLE | EMEA | GBL | QA | sfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_QA_DATA_ROLE
COMM_GBL_MDM_DMART_STG_DATA_ROLE | EMEA | GBL | STAGE | sfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_STG_DATA_ROLE
COMM_GBL_MDM_DMART_PROD_DATA_ROLE | EMEA | GBL | PROD | sfdb_eu-west-1_emeaprod01_COMM_GBL_MDM_DMART_PROD_DATA_ROLE"
},
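The worksheet settings chosen in the guide above follow a fixed naming convention (warehouse COMM_MDM_DMART_WH, database COMM_<MARKET>_MDM_DMART_<ENV>_DB, schema CUSTOMER_SL). A minimal sketch of assembling these connection parameters, assuming the patterns from this guide; the helper name and the `externalbrowser` authenticator value are illustrative assumptions, not part of the guide:

```python
# Sketch: assemble Snowflake session parameters following the naming
# conventions from the Connect Guide. The helper name and the
# authenticator value are assumptions for illustration only.

def dmart_session_params(market: str, env: str, account_url: str) -> dict:
    """Build connection parameters for the MDM DataMart CUSTOMER_SL schema."""
    return {
        "account_url": account_url,                       # taken from the Environments page
        "authenticator": "externalbrowser",               # assumption: COMPANY SSO sign-in
        "warehouse": "COMM_MDM_DMART_WH",                 # default warehouse per the guide
        "database": f"COMM_{market}_MDM_DMART_{env}_DB",  # DB name pattern from the guide
        "schema": "CUSTOMER_SL",                          # user access layer
    }

params = dmart_session_params(
    market="EMEA",
    env="STG",
    account_url="https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com",
)
print(params["database"])  # COMM_EMEA_MDM_DMART_STG_DB
```

The resulting dict can be passed to whichever Snowflake client is in use; the actual role must still be requested via the DATA role process described above.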
{
"title": "Data model",
"pageID": "196886989",
"pageLink": "/display/GMDM/Data+model",
"content": "The data mart contains MDM data in object and relational data models. A fragment of the model is presented in the picture below. The object data model includes the latest version of the Reltio JSON documents representing entities, relationships, LOVs, and the merge-tree. They are loaded into the ENTITIES, RELATIONS, LOV_DATA, MERGES, and MATCHES tables. They are loaded from Reltio using the HUB streaming interface described here. The object model is transformed into the relational model by a set of dynamic views using the Snowflake JSON processing query language. Dynamic views are generated dynamically from the Reltio data model. The regeneration process is maintained in Jenkins and triggered weekly or on demand. The generation process starts from root objects like HCP and HCO, walks through the JSON tree, and generates views with the following rules:
- for simple attributes like first name, a view column is generated in the current view.
- for nested attributes like addresses, a new view is generated; the nested attribute uri and the parent key from the parent view become the primary key in the new view.
- for lookup values like gender, the lookup id is generated.
Model versions
There are two versions of the Reltio data model maintained in the data mart:
- COMPANY Reltio data model - the current model maintained in all regional data marts that consume data from COMPANY Reltio regional instances
- IQVIA Reltio data model - the legacy model from the first Reltio instance, maintained in the EU regional data mart that consumes data from Global Legacy Reltio (ex-us)
Key generation strategy
Object model:
Objects | Key columns | Description
ENTITIES, MATCHES, MERGES | entity_uri, country* | Reltio entity unique identifier and country
RELATIONS | relation_uri, country* | Reltio relationship unique identifier and country
LOV_DATA | id, mdm_region* | the concatenation of Reltio LOV name + ':' + canonical code as id, and MDM region
* - only in global data mart
Relational model:
Objects | Key columns | Description
root objects like HCP, HCO, MCO, MERGE_HISTORY, MATCH_HISTORY | entity_uri, country* | Reltio entity unique identifier and country
AFFILIATIONS | relation_uri, country* | Reltio relationship unique identifier and country
child views for nested attributes (Addresses, Specialties ...) | parent view keys, nested attribute uri, country* | parent view keys + nested attribute uri + country
* - only in global data mart
Schemas:
MDM Data Mart contains the following schemas:
Schema name | Description
LANDING | Schemas used by HUB ETL processes as a stage area
CUSTOMER | Main schema containing the data mart data
CUSTOMER_SL | Access schema to the CUSTOMER schema data
AES_RS_SL | Contains views presenting data in the Redshift data model"
},
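The view-generation rules described in the Data model page (simple attributes become columns, nested attributes become child views keyed by attribute uri plus the parent key, lookups keep the lookup id) can be illustrated with a small sketch. This is a simplified stand-in, not the HUB generator: the function name and the shape of the sample JSON are assumptions, and real Reltio documents are richer.

```python
# Sketch of the dynamic-view generation rules applied to one entity:
#  - simple attributes   -> columns of the root row
#  - nested attributes   -> child rows keyed by (parent key, attribute uri)
#  - lookup values       -> the lookup id is kept
# The JSON shape is a simplified stand-in for a real Reltio document.

def flatten_entity(entity: dict) -> tuple[dict, list[dict]]:
    root = {"entity_uri": entity["uri"], "country": entity["country"]}
    children = []
    for name, values in entity["attributes"].items():
        for v in values:
            if "lookupCode" in v:                   # lookup value: keep the lookup id
                root[name.lower()] = v["lookupCode"]
            elif isinstance(v.get("value"), dict):  # nested attribute: new child row
                row = {"parent_uri": entity["uri"], "attribute_uri": v["uri"]}
                row.update({k.lower(): x[0]["value"] for k, x in v["value"].items()})
                children.append(row)
            else:                                   # simple attribute: column on root
                root[name.lower()] = v["value"]
    return root, children

hcp = {
    "uri": "entities/123", "country": "DE",
    "attributes": {
        "FirstName": [{"value": "Anna"}],
        "Gender": [{"lookupCode": "F", "value": "Female"}],
        "Addresses": [{"uri": "entities/123/attributes/Addresses/1",
                       "value": {"City": [{"value": "Berlin"}]}}],
    },
}
root, children = flatten_entity(hcp)
```

Here `root` corresponds to a row of the generated HCP view and each entry of `children` to a row of a nested-attribute view such as ADDRESSES, with `(parent_uri, attribute_uri)` acting as the primary key described above.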
{
"title": "AES_RS_SL",
"pageID": "203229895",
"pageLink": "/display/GMDM/AES_RS_SL",
"content": "The schema contains a set of views that mimic the MDM DataMart in Redshift. The views integrate both data models (COMPANY and IQVIA) and present data from all countries available in Reltio.Differences from the original Redshift martTechnical ids in views holding nested attribute values are different from the Redshift ones. They are based on Reltio attribute uris instead of the MDM checksum generated from attribute values.Foreign keys for code values to be joined with the dictionary table are also generated using a different strategy."
},
{
"title": "CUSTOMER schema",
"pageID": "163919161",
"pageLink": "/display/GMDM/CUSTOMER+schema",
"content": "This is the main schema containing MDM data in two formats.Object model that represents the Reltio JSON format. Data in this format are kept in the ENTITIES, RELATIONS, and MERGE_TREE tables. Relational model created as a set of views (standard or materialized) derived from the object model. Most of the views are generated in an automated way based on the Reltio Data Model configuration. They directly reflect the Reltio object model. There are two sets of views as there are two models in Reltio: COMPANY and IQVIA. Those views can change dynamically as the Reltio config is updated."
},
{
"title": "Customer base objects",
"pageID": "164470194",
"pageLink": "/display/GMDM/Customer+base+objects",
"content": "ENTITIESKeeps Reltio entity objectsColumnTypeDescriptionENTITY_URITEXTReltio entity uriCOUNTRYTEXTCountryENTITY_TYPETEXTEntity type, for example: HCO, HCPACTIVEBOOLEANActive flag CREATE_TIMETIMESTAMP_LTZCreate timeUPDATE_TIMETIMESTAMP_LTZUpdate timeOBJECTVARIANTJSON objectLAST_EVENT_TYPETEXTThe last event that updated the JSON objectLAST_EVENT_TIMETIMESTAMP_LTZLast event timePARENTTEXTParent entity uriCHECKSUMNUMBERChecksumCOMPANY_GLOBAL_CUSTOMER_IDTEXTEntity COMPANY Global IdPARENT_COMPANY_GLOBAL_CUSTOMER_ITEXTIn case of a lost merge, this field stores the COMPANY Global Id of the winner entity; otherwise it is emptyHIST_INACTIVE_ENTITIESUsed for historical inactive OneKey crosswalks. The structure is a copy of the ENTITIES table.ColumnTypeDescriptionENTITY_URITEXTReltio entity uriCOUNTRYTEXTCountryENTITY_TYPETEXTEntity type, for example: HCO, HCPACTIVEBOOLEANActive flag CREATE_TIMETIMESTAMP_LTZCreate timeUPDATE_TIMETIMESTAMP_LTZUpdate timeOBJECTVARIANTJSON objectLAST_EVENT_TYPETEXTThe last event that updated the JSON objectLAST_EVENT_TIMETIMESTAMP_LTZLast event timePARENTTEXTParent entity uriCHECKSUMNUMBERChecksumCOMPANY_GLOBAL_CUSTOMER_IDTEXTEntity COMPANY Global IdPARENT_COMPANY_GLOBAL_CUSTOMER_ITEXTIn case of a lost merge, this field stores the COMPANY Global Id of the winner entity; otherwise it is emptyRELATIONSKeeps Reltio relation objectsColumnTypeDescriptionRELATION_URITEXTReltio relation uriCOUNTRYTEXTCountryRELATION_TYPETEXTRelation typeACTIVEBOOLEANActive flagCREATE_TIMETIMESTAMP_LTZCreate timeUPDATE_TIMETIMESTAMP_LTZUpdate timeSTART_ENTITY_URITEXTSource entity uri END_ENTITY_URITEXTTarget entity uriOBJECTVARIANTJSON object LAST_EVENT_TYPETEXTThe last event type that modified the recordLAST_EVENT_TIMETIMESTAMP_LTZLast event timePARENTTEXTnot usedCHECKSUMNUMBERChecksumMATCHESThe table presents active and historical matches found in Reltio for all entities.ColumnTypeDescriptionENTITY_URITEXTReltio entity uriTARGET_ENTITY_URITEXTReltio entity uri to which ENTITY_URI matchesMATCH_TYPETEXTMatch 
typeMATCH_RULE_NAMETEXTMatch rule nameCOUNTRYTEXTCountryLAST_EVENT_TYPETEXTThe last event type that modified the recordLAST_EVENT_TIMETIMESTAMP_LTZLast event timeLAST_EVENT_CHECKSUMNUMBERThe last event checksumACTIVEBOOLEANActive flagMATCH_HISTORYThe view shows the match history for active and inactive matches enriched by merge data. The merge info is available for matches that were inactivated by the merge action triggered by users or Reltio background processes. ColumnTypeDescriptionENTITY_URITEXTReltio entity uriTARGET_ENTITY_URITEXTReltio entity uri to which ENTITY_URI matchesMATCH_TYPETEXTMatch typeMATCH_RULE_NAMETEXTMatch rule nameCOUNTRYTEXTCountryLAST_EVENT_TYPETEXTThe last event type that modified the recordLAST_EVENT_TIMETIMESTAMP_LTZLast event timeLAST_EVENT_CHECKSUMNUMBERThe last event checksumACTIVEBOOLEANActive flagMERGEDBOOLEANMerge indicator; the true value indicates that the merge happened for the match.MERGE_REASONTEXT Merge reason MERGE_USERTEXTReltio user name or process name that executed the mergeMERGE_DATETO_TIMESTAMP_LTZMerge date MERGE_RULETEXTMerge rule that triggered the mergeMERGESThe table presents active merges found in Reltio based on the merge_tree export.ColumnTypeDescriptionENTITY_URITEXTReltio entity uriLAST_UPDATE_TIMETO_TIMESTAMP_LTZDate of the last update on the selected rowCREATE_TIMETO_TIMESTAMP_LTZCreation date on the selected rowOBJECTVARIANTJSON object MERGE_HISTORYThe view shows the merge history for active entities. The merge history view is built based on the merge_tree Reltio export. ColumnTypeDescriptionENTITY_URITEXTReltio entity uriLOSER_ENTITY_URITEXTReltio entity uri for the merge loserMERGE_REASONTEXT Merge reason Merge on the flyThis indicates that automatic match rules were able to find matches for a newly added entity.
Therefore, the new entity was not created as a separate entity in the platform but was merged into an existing one instead.Merge by crosswalksIf a newly added entity has the same crosswalk as that of an existing entity in the platform, such entities are merged automatically on the fly because the Reltio platform does not allow multiple entities with the same crosswalk.Automatic merge by crosswalksSometimes, two entities with the same crosswalk may exist in the platform (simultaneously added entities). In this case, such entities are merged automatically using a special background thread.Group merge (Matches found on object creation)This indicates that several entities are grouped into one merge request because all such entities will be merged at the same time to create a single entity in the platform. The reason for a group merge can be an automatic match rule or same crosswalk or both.Merges found by background merge processThe background match thread (incremental match processor) modifies entities as a result of create/change/remove events and performs a rematch. 
During the rematch, if some entities match using the automatic match rules, such entities are merged.Merge by handThis is a merge performed by a user through the API or from the UI by going through the potential matches.MERGE_RULETEXTMerge rule that triggered the mergeUSERTEXTUser name that executed the mergeMERGE_DATETO_TIMESTAMP_LTZMerge date ENTITY_HISTORYKeeps the event history for entities and relationsColumnTypeDescriptionEVENT_KEYTEXTEvent keyEVENT_PARTITIONNUMBERPartition number in KafkaEVENT_OFFSETNUMBEROffset in KafkaEVENT_TOPICTEXTName of the topic in Kafka where this event is storedEVENT_TIMETIMESTAMP_LTZTimestamp when the event was generatedEVENT_TYPETEXTEvent typeCOUNTRYTEXTCountryENTITY_URITEXTReltio entity uriCHECKSUMNUMBERChecksumLOV_DATAKeeps LOV objectsColumnTypeDescriptionIDTEXTLOV identifier OBJECTVARIANTReltio RDM object in JSON formatCODESColumnTypeDescriptionSOURCETEXTSource MDM system nameCODE_IDTEXTCode id - generated by concatenating the LOV name and the canonical codeCANONICAL_CODETEXTCanonical codeLOV_NAMETEXTLOV (Dictionary) nameACTIVEBOOLEANActive flagDESCTEXTEnglish descriptionCOUNTRYTEXTCode countryPARENTSTEXTParent code idCODE_TRANSLATIONSRDM code translationsColumnTypeDescriptionSOURCETEXTSource MDM system nameCODE_IDTEXTCode idCANONICAL_CODETEXTCanonical codeLOV_NAMETEXTLOV (Dictionary) nameACTIVEBOOLEANActive flagLANG_CODETEXTLanguage codeLAND_DESCTEXTLanguage descriptionCOUNTRYTEXTCountryCODE_SOURCE_MAPPINGSSource code mappings to canonical codes in Reltio RDMColumnTypeDescriptionSOURCETEXTSource MDM system nameCODE_IDTEXTCode idSOURCE_NAMETEXTSource nameSOURCE_CODETEXTSource codeACTIVEBOOLEANActive flag (true - active, false - inactive)IS_CANONICALBOOLEANIs canonicalCOUNTRYTEXTCountryLAST_MODIFIEDTIMESTAMP_LTZLast modified datePARENTTEXTParent codeENTITY_CROSSWALKSKeeps entity crosswalksColumnTypeDescriptionCROSSWALK_URITEXTCrosswalk uriENTITY_URITEXTEntity uriENTITY_TYPETEXTEntity typeACTIVEBOOLEANActive flagTYPETEXTCrosswalk 
typeVALUETEXTCrosswalk valueSOURCE_TABLETEXTSource tableCREATE_DATETIMESTAMP_NTZCreate dateUPDATE_DATETIMESTAMP_NTZUpdate dateRELTIO_LOAD_DATETIMESTAMP_NTZDate when this crosswalk was loaded to ReltioDELETE_DATETIMESTAMP_NTZDelete dateCOMPANY_GLOBAL_CUSTOMER_IDTEXTEntity COMPANY Global IdRELATION_CROSSWALKSKeeps relation crosswalksColumnTypeDescriptionCROSSWALK_URITEXTCrosswalk URIRELATION_URITEXTRelation URIRELATION_TYPETEXTRelation typeACTIVEBOOLEANActive flagTYPETEXTCrosswalk typeVALUETEXTCrosswalk valueSOURCE_TABLETEXTSource tableCREATE_DATETIMESTAMP_NTZCreate dateUPDATE_DATETIMESTAMP_NTZUpdate dateDELETE_DATETIMESTAMP_NTZDelete dateRELTIO_LOAD_DATETIMESTAMP_NTZDate when this relation was loaded to ReltioATTRIBUTE_SOURCEPresents information about which crosswalk provided the given attribute. The view can be joined with views for nested attributes to also get attribute values.ColumnTypeDescriptionATTRIBUTE_URITEXTAttribute URIENTITY_URITEXTEntity URIACTIVEBOOLEANIs entity activeTYPETEXTCrosswalk typeVALUETEXTCrosswalk valueSOURCE_TABLETEXTCrosswalk source tableENTITY_UPDATE_DATESPresents information about the update dates of entities in Reltio MDM or Snowflake. The view can be used to query updated records in a period of time, including root objects like HCP, HCO, MCO, and child objects like IDENTIFIERS, SPECIALTIES, ADDRESSES, etc.ColumnTypeDescriptionENTITY_URITEXTEntity URIACTIVEBOOLEANIs entity activeENTITY_TYPETEXTType of entityCOUNTRYTEXTCountry iso codeMDM_CREATE_TIMETIMESTAMP_LTZEntity create time in ReltioMDM_UPDATE_TIMETIMESTAMP_LTZEntity update time in ReltioSF_CREATE_TIMETIMESTAMP_LTZEntity create time in Snowflake DBSF_UPDATE_TIMETIMESTAMP_LTZEntity last update time in SnowflakeLAST_EVENT_TIMETIMESTAMP_LTZLast Kafka event timestampCHECKSUMNUMBERChecksumCOMPANY_GLOBAL_CUSTOMER_IDTEXTEntity COMPANY Global IdPARENT_COMPANY_GLOBAL_CUSTOMER_ITEXTIn case of a lost merge, this field stores the COMPANY Global Id of the winner entity; otherwise it is 
emptyRELATION_UPDATE_DATESPresents information about the update dates of relations in Reltio MDM or Snowflake. The view can be used to query all updated entries in a period of time from AFFILIATIONS and child objects like AFFIL_RELATION_TYPEColumnTypeDescriptionRELATION_URITEXTRelation URIACTIVEBOOLEANIs relation activeRELATION_TYPETEXTType of relationCOUNTRYTEXTCountry iso codeMDM_CREATE_TIMETIMESTAMP_LTZRelation create time in ReltioMDM_UPDATE_TIMETIMESTAMP_LTZRelation update time in ReltioSF_CREATE_TIMETIMESTAMP_LTZRelation create time in Snowflake DBSF_UPDATE_TIMETIMESTAMP_LTZRelation last update time in SnowflakeLAST_EVENT_TIMETIMESTAMP_LTZLast Kafka event timestampCHECKSUMNUMBERChecksum"
},
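The CODES table above notes that CODE_ID is generated by concatenating the LOV name and the canonical code, and the Data model page specifies ':' as the separator. A minimal sketch of that key strategy; the function name is an assumption for illustration and is not part of the HUB codebase:

```python
# Minimal sketch of the CODE_ID / LOV_DATA id strategy: the id is the
# Reltio LOV (dictionary) name concatenated with the canonical code,
# separated by ':'. The function name is an illustrative assumption.

def make_code_id(lov_name: str, canonical_code: str) -> str:
    """Build the code id: '<LOV name>:<canonical code>'."""
    return f"{lov_name}:{canonical_code}"

# e.g. a gender lookup code
code_id = make_code_id("LKUP_IMS_GENDER", "F")  # 'LKUP_IMS_GENDER:F'
```

In the global data mart this id is paired with mdm_region to form the LOV_DATA key, as described in the key generation strategy.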
{
"title": "Data Materialization Process",
"pageID": "347657026",
"pageLink": "/display/GMDM/Data+Materialization+Process",
"content": ""
},
{
"title": "Dynamic views for IQVIA MDM Model",
"pageID": "164470213",
"pageLink": "/display/GMDM/Dynamic+views++for+IQVIA+MDM+Model",
"content": "HCPHealth care providerReltio URI: configuration/entityTypes/HCPMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeFIRST_NAMEVARCHARFirst Nameconfiguration/entityTypes/HCP/attributes/FirstNameLAST_NAMEVARCHARLast Nameconfiguration/entityTypes/HCP/attributes/LastNameMIDDLE_NAMEVARCHARMiddle Nameconfiguration/entityTypes/HCP/attributes/MiddleNameNAMEVARCHARNameconfiguration/entityTypes/HCP/attributes/NamePREFIXVARCHARconfiguration/entityTypes/HCP/attributes/PrefixLKUP_IMS_PREFIXSUFFIX_NAMEVARCHARGeneration Suffixconfiguration/entityTypes/HCP/attributes/SuffixNameLKUP_IMS_SUFFIXPREFERRED_NAMEVARCHARconfiguration/entityTypes/HCP/attributes/PreferredNameNICKNAMEVARCHARconfiguration/entityTypes/HCP/attributes/NicknameCOUNTRY_CODEVARCHARCountry Codeconfiguration/entityTypes/HCP/attributes/CountryLKUP_IMS_COUNTRY_CODEGENDERVARCHARconfiguration/entityTypes/HCP/attributes/GenderLKUP_IMS_GENDERTYPE_CODEVARCHARType codeconfiguration/entityTypes/HCP/attributes/TypeCodeLKUP_IMS_HCP_CUST_TYPEACCOUNT_TYPEVARCHARAccount Typeconfiguration/entityTypes/HCP/attributes/AccountTypeSUB_TYPE_CODEVARCHARSub type codeconfiguration/entityTypes/HCP/attributes/SubTypeCodeLKUP_IMS_HCP_SUBTYPETITLEVARCHARconfiguration/entityTypes/HCP/attributes/TitleLKUP_IMS_PROF_TITLEINITIALSVARCHARInitialsconfiguration/entityTypes/HCP/attributes/InitialsD_O_BDATEDate of Birthconfiguration/entityTypes/HCP/attributes/DoBY_O_BVARCHARBirth 
Yearconfiguration/entityTypes/HCP/attributes/YoBMAPP_HCP_STATUSVARCHARconfiguration/entityTypes/HCP/attributes/MAPPHcpStatusLKUP_MAPP_HCPSTATUSGO_STATUSVARCHARconfiguration/entityTypes/HCP/attributes/GOStatusLKUP_GOVOFF_GOSTATUSPIGO_STATUSVARCHARconfiguration/entityTypes/HCP/attributes/PIGOStatusLKUP_GOVOFF_PIGOSTATUSNIPPIGO_STATUSVARCHARconfiguration/entityTypes/HCP/attributes/NIPPIGOStatusLKUP_GOVOFF_NIPPIGOSTATUSPRIMARY_PIGO_RATIONALEVARCHARconfiguration/entityTypes/HCP/attributes/PrimaryPIGORationaleLKUP_GOVOFF_PIGORATIONALESECONDARY_PIGO_RATIONALEVARCHARconfiguration/entityTypes/HCP/attributes/SecondaryPIGORationaleLKUP_GOVOFF_PIGORATIONALEPIGOSME_REVIEWVARCHARconfiguration/entityTypes/HCP/attributes/PIGOSMEReviewLKUP_GOVOFF_PIGOSMEREVIEWGSQ_DATEDATEGSQDateconfiguration/entityTypes/HCP/attributes/GSQDateMAPP_DO_NOT_USEVARCHARconfiguration/entityTypes/HCP/attributes/MAPPDoNotUseLKUP_GOVOFF_DONOTUSEMAPP_CHANGE_DATEVARCHARconfiguration/entityTypes/HCP/attributes/MAPPChangeDateMAPP_CHANGE_REASONVARCHARconfiguration/entityTypes/HCP/attributes/MAPPChangeReasonIS_EMPLOYEEBOOLEANconfiguration/entityTypes/HCP/attributes/IsEmployeeVALIDATION_STATUSVARCHARValidation Status of the Customerconfiguration/entityTypes/HCP/attributes/ValidationStatusLKUP_IMS_VAL_STATUSSOURCE_CHANGE_DATEDATESourceChangeDateconfiguration/entityTypes/HCP/attributes/SourceChangeDateSOURCE_CHANGE_REASONVARCHARSourceChangeReasonconfiguration/entityTypes/HCP/attributes/SourceChangeReasonORIGIN_SOURCEVARCHAROriginating Sourceconfiguration/entityTypes/HCP/attributes/OriginSourceOK_VR_TRIGGERVARCHARconfiguration/entityTypes/HCP/attributes/OK_VR_TriggerLKUP_IMS_SEND_FOR_VALIDATIONBIRTH_CITYVARCHARBirth Cityconfiguration/entityTypes/HCP/attributes/BirthCityBIRTH_STATEVARCHARBirth Stateconfiguration/entityTypes/HCP/attributes/BirthStateSTATE_CODEBIRTH_COUNTRYVARCHARBirth 
Countryconfiguration/entityTypes/HCP/attributes/BirthCountryCOUNTRY_CDD_O_DDATEconfiguration/entityTypes/HCP/attributes/DoDY_O_DVARCHARconfiguration/entityTypes/HCP/attributes/YoDTAX_IDVARCHARconfiguration/entityTypes/HCP/attributes/TaxIDSSN_LAST4VARCHARconfiguration/entityTypes/HCP/attributes/SSNLast4MEVARCHARDO NOT USE THIS ATTRIBUTE - will be deprecatedconfiguration/entityTypes/HCP/attributes/MENPIVARCHARDO NOT USE THIS ATTRIBUTE - will be deprecatedconfiguration/entityTypes/HCP/attributes/NPIUPINVARCHARDO NOT USE THIS ATTRIBUTE - will be deprecatedconfiguration/entityTypes/HCP/attributes/UPINKAISER_PROVIDERBOOLEANconfiguration/entityTypes/HCP/attributes/KaiserProviderMAJOR_PROFESSIONAL_ACTIVITYVARCHARconfiguration/entityTypes/HCP/attributes/MajorProfessionalActivityMPA_CDPRESENT_EMPLOYMENTVARCHARconfiguration/entityTypes/HCP/attributes/PresentEmploymentPE_CDTYPE_OF_PRACTICEVARCHARconfiguration/entityTypes/HCP/attributes/TypeOfPracticeTOP_CDSOLOBOOLEANconfiguration/entityTypes/HCP/attributes/SoloGROUPBOOLEANconfiguration/entityTypes/HCP/attributes/GroupADMINISTRATORBOOLEANconfiguration/entityTypes/HCP/attributes/AdministratorRESEARCHBOOLEANconfiguration/entityTypes/HCP/attributes/ResearchCLINICAL_TRIALSBOOLEANconfiguration/entityTypes/HCP/attributes/ClinicalTrialsWEBSITE_URLVARCHARconfiguration/entityTypes/HCP/attributes/WebsiteURLIMAGE_LINKSVARCHARconfiguration/entityTypes/HCP/attributes/ImageLinksDOCUMENT_LINKSVARCHARconfiguration/entityTypes/HCP/attributes/DocumentLinksVIDEO_LINKSVARCHARconfiguration/entityTypes/HCP/attributes/VideoLinksDESCRIPTIONVARCHARconfiguration/entityTypes/HCP/attributes/DescriptionCREDENTIALSVARCHARconfiguration/entityTypes/HCP/attributes/CredentialsCREDFORMER_FIRST_NAMEVARCHARconfiguration/entityTypes/HCP/attributes/FormerFirstNameFORMER_LAST_NAMEVARCHARconfiguration/entityTypes/HCP/attributes/FormerLastNameFORMER_MIDDLE_NAMEVARCHARconfiguration/entityTypes/HCP/attributes/FormerMiddleNameFORMER_SUFFIX_NAMEVARCHARconfiguration/entityTy
pes/HCP/attributes/FormerSuffixNameSSNVARCHARconfiguration/entityTypes/HCP/attributes/SSNPRESUMED_DEADBOOLEANconfiguration/entityTypes/HCP/attributes/PresumedDeadDEA_BUSINESS_ACTIVITYVARCHARconfiguration/entityTypes/HCP/attributes/DEABusinessActivitySTATUS_IMSVARCHARconfiguration/entityTypes/HCP/attributes/StatusIMSLKUP_IMS_STATUSSTATUS_UPDATE_DATEDATEconfiguration/entityTypes/HCP/attributes/StatusUpdateDateSTATUS_REASON_CODEVARCHARconfiguration/entityTypes/HCP/attributes/StatusReasonCodeLKUP_IMS_SRC_DEACTIVE_REASON_CODECOMMENTERSVARCHARCommentersconfiguration/entityTypes/HCP/attributes/CommentersSOURCE_CREATION_DATEDATEconfiguration/entityTypes/HCP/attributes/SourceCreationDateSOURCE_NAMEVARCHARconfiguration/entityTypes/HCP/attributes/SourceNameSUB_SOURCE_NAMEVARCHARconfiguration/entityTypes/HCP/attributes/SubSourceNameEXCLUDE_FROM_MATCHVARCHARconfiguration/entityTypes/HCP/attributes/ExcludeFromMatchPROVIDER_IDENTIFIER_TYPEVARCHARProvider Identifier Typeconfiguration/entityTypes/HCP/attributes/ProviderIdentifierTypeLKUP_IMS_PROVIDER_IDENTIFIER_TYPECATEGORYVARCHARCategory Codeconfiguration/entityTypes/HCP/attributes/CategoryLKUP_IMS_HCP_CATEGORYDEGREE_CODEVARCHARDegree Codeconfiguration/entityTypes/HCP/attributes/DegreeCodeLKUP_IMS_DEGREESALUTATION_NAMEVARCHARSalutation Nameconfiguration/entityTypes/HCP/attributes/SalutationNameIS_BLACK_LISTEDBOOLEANIndicates to Blacklist the profileconfiguration/entityTypes/HCP/attributes/IsBlackListedTRAINING_HOSPITALVARCHARTraining Hospitalconfiguration/entityTypes/HCP/attributes/TrainingHospitalACRONYM_NAMEVARCHARAcronymNameconfiguration/entityTypes/HCP/attributes/AcronymNameFIRST_SET_DATEDATEDate of 1st Installationconfiguration/entityTypes/HCP/attributes/FirstSetDateCREATE_DATEDATEIndividual Creation Dateconfiguration/entityTypes/HCP/attributes/CreateDateUPDATE_DATEDATEDate of Last Individual Updateconfiguration/entityTypes/HCP/attributes/UpdateDateCHECK_DATEDATEDate of Last Individual Quality 
Checkconfiguration/entityTypes/HCP/attributes/CheckDateSTATE_CODEVARCHARSituation of the healthcare professional (ex. Active, Inactive, Retired)configuration/entityTypes/HCP/attributes/StateCodeLKUP_IMS_PROFILE_STATESTATE_DATEDATEDate when state of the record was last modified.configuration/entityTypes/HCP/attributes/StateDateVALIDATION_CHANGE_REASONVARCHARReason for Validation Status changeconfiguration/entityTypes/HCP/attributes/ValidationChangeReasonLKUP_IMS_VAL_STATUS_CHANGE_REASONVALIDATION_CHANGE_DATEDATEDate of Validation changeconfiguration/entityTypes/HCP/attributes/ValidationChangeDateAPPOINTMENT_REQUIREDBOOLEANIndicates whether sales reps need to make an appointment to see the Professional.configuration/entityTypes/HCP/attributes/AppointmentRequiredNHS_STATUSVARCHARNational Health System Statusconfiguration/entityTypes/HCP/attributes/NHSStatusLKUP_IMS_SECTOR_OF_CARENUM_OF_PATIENTSVARCHARNumber of attached patientsconfiguration/entityTypes/HCP/attributes/NumOfPatientsPRACTICE_SIZEVARCHARPractice Sizeconfiguration/entityTypes/HCP/attributes/PracticeSizePATIENTS_X_DAYVARCHARPatients Per Dayconfiguration/entityTypes/HCP/attributes/PatientsXDayPREFERRED_LANGUAGEVARCHARPreferred Spoken Languageconfiguration/entityTypes/HCP/attributes/PreferredLanguagePOLITICAL_AFFILIATIONVARCHARPolitical Affiliationconfiguration/entityTypes/HCP/attributes/PoliticalAffiliationLKUP_IMS_POL_AFFILPRESCRIBING_LEVELVARCHARPrescribing Levelconfiguration/entityTypes/HCP/attributes/PrescribingLevelLKUP_IMS_PRES_LEVELEXTERNAL_RATINGVARCHARExternal Ratingconfiguration/entityTypes/HCP/attributes/ExternalRatingTARGETING_CLASSIFICATIONVARCHARTargeting Classificationconfiguration/entityTypes/HCP/attributes/TargetingClassificationKOL_TITLEVARCHARKey Opinion Leader Titleconfiguration/entityTypes/HCP/attributes/KOLTitleSAMPLING_STATUSVARCHARSampling Status of HCPconfiguration/entityTypes/HCP/attributes/SamplingStatusLKUP_IMS_SAMPLING_STATUSADMINISTRATIVE_NAMEVARCHARAdministrative 
Nameconfiguration/entityTypes/HCP/attributes/AdministrativeNamePROFESSIONAL_DESIGNATIONVARCHARconfiguration/entityTypes/HCP/attributes/ProfessionalDesignationLKUP_IMS_PROF_DESIGNATIONEXTERNAL_INFORMATION_URLVARCHARconfiguration/entityTypes/HCP/attributes/ExternalInformationURLMATCH_STATUS_CODEVARCHARconfiguration/entityTypes/HCP/attributes/MatchStatusCodeLKUP_IMS_MATCH_STATUS_CODESUBSCRIPTION_FLAG1BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes/HCP/attributes/SubscriptionFlag1SUBSCRIPTION_FLAG2BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes/HCP/attributes/SubscriptionFlag2SUBSCRIPTION_FLAG3BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes/HCP/attributes/SubscriptionFlag3SUBSCRIPTION_FLAG4BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes/HCP/attributes/SubscriptionFlag4SUBSCRIPTION_FLAG5BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes/HCP/attributes/SubscriptionFlag5SUBSCRIPTION_FLAG6BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes/HCP/attributes/SubscriptionFlag6SUBSCRIPTION_FLAG7BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes/HCP/attributes/SubscriptionFlag7SUBSCRIPTION_FLAG8BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes/HCP/attributes/SubscriptionFlag8SUBSCRIPTION_FLAG9BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes/HCP/attributes/SubscriptionFlag9SUBSCRIPTION_FLAG10BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes/HCP/attributes/SubscriptionFlag10MIDDLE_INITIALVARCHARMiddle Initial. 
This attribute is populated from Middle Nameconfiguration/entityTypes/HCP/attributes/MiddleInitialDELETE_ENTITYBOOLEANProperty for GDPR removalconfiguration/entityTypes/HCP/attributes/DeleteEntityPARTY_IDVARCHARconfiguration/entityTypes/HCP/attributes/PartyIDLAST_VERIFICATION_STATUSVARCHARconfiguration/entityTypes/HCP/attributes/LastVerificationStatusLAST_VERIFICATION_DATEDATEconfiguration/entityTypes/HCP/attributes/LastVerificationDateEFFECTIVE_DATEDATEconfiguration/entityTypes/HCP/attributes/EffectiveDateEND_DATEDATEconfiguration/entityTypes/HCP/attributes/EndDatePARTY_LOCALIZATION_CODEVARCHARconfiguration/entityTypes/HCP/attributes/PartyLocalizationCodeMATCH_PARTY_NAMEVARCHARconfiguration/entityTypes/HCP/attributes/MatchPartyNameLICENSEReltio URI: configuration/entityTypes/HCP/attributes/LicenseMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameLICENSE_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeCATEGORYVARCHARconfiguration/entityTypes/HCP/attributes/License/attributes/CategoryLKUP_IMS_LIC_CATEGORYNUMBERVARCHARState License Number. A unique license number is listed for each license the physician holds. There is no standard format syntax. Format examples: 18986, 4301079019, BX1464089. There is also no limit to the number of licenses a physician can hold in a state. Example: A physician can have an inactive resident license plus unlimited active licenses. Residents can have as many as four licenses since some states issue licenses every yearconfiguration/entityTypes/HCP/attributes/License/attributes/NumberBOARD_EXTERNAL_IDVARCHARBoard External IDconfiguration/entityTypes/HCP/attributes/License/attributes/BoardExternalIDBOARD_CODEVARCHARState License Board Code. For AMA, the board code will always be AMAconfiguration/entityTypes/HCP/attributes/License/attributes/BoardCodeSTLIC_BRD_CD_LOVSTATEVARCHARState License State. Two character field. 
USPS standard abbreviations.configuration/entityTypes/HCP/attributes/License/attributes/StateLKUP_IMS_STATE_CODEISO_COUNTRY_CODEVARCHARISO country codeconfiguration/entityTypes/HCP/attributes/License/attributes/ISOCountryCodeLKUP_IMS_COUNTRY_CODEDEGREEVARCHARState License Degree. A physician may hold more than one license in a given state. However, not more than one MD or more than one DO license in the same state.configuration/entityTypes/HCP/attributes/License/attributes/DegreeLKUP_IMS_DEGREEAUTHORIZATION_STATUSVARCHARAuthorization Statusconfiguration/entityTypes/HCP/attributes/License/attributes/AuthorizationStatusLKUP_IMS_IDENTIFIER_STATUSLICENSE_NUMBER_KEYVARCHARState License Number Keyconfiguration/entityTypes/HCP/attributes/License/attributes/LicenseNumberKeyAUTHORITY_NAMEVARCHARAuthority Nameconfiguration/entityTypes/HCP/attributes/License/attributes/AuthorityNamePROFESSION_CODEVARCHARProfessionconfiguration/entityTypes/HCP/attributes/License/attributes/ProfessionCodeLKUP_IMS_PROFESSIONTYPE_IDVARCHARAuthorization Type idconfiguration/entityTypes/HCP/attributes/License/attributes/TypeIdTYPEVARCHARState License Type. U = Unlimited there is no restriction on the physician to practice medicine; L = Limited implies restrictions of some sort. For example, the physician may practice only in a given county, admit patients only to particular hospitals, or practice under the supervision of a physician with a license in state or private hospitals or other settings; T = Temporary issued to a physician temporarily practicing in an underserved area outside his/her state of licensure. Also granted between board meetings when new licenses are issued. Time span for a temporary license varies from state to state. 
Temporary licenses typically expire 6-9 months from the date they are issued; R = Resident License granted to a physician in graduate medical education (e.g., residency training).configuration/entityTypes/HCP/attributes/License/attributes/TypeLKUP_IMS_LICENSE_TYPEPRIVILEGE_IDVARCHARLicense Privilegeconfiguration/entityTypes/HCP/attributes/License/attributes/PrivilegeIdPRIVILEGE_NAMEVARCHARLicense Privilege Nameconfiguration/entityTypes/HCP/attributes/License/attributes/PrivilegeNamePRIVILEGE_RANKVARCHARLicense Privilege Rankconfiguration/entityTypes/HCP/attributes/License/attributes/PrivilegeRankSTATUSVARCHARState License Status. A = Active. Physician is licensed to practice within the state; I = Inactive. If the physician has not reregistered a state license OR if the license has been suspended or revoked by the state board; X = unknown. If the state has not provided current information. Note: Some state boards issue inactive licenses to physicians who want to maintain licensure in the state although they are currently practicing in another state.configuration/entityTypes/HCP/attributes/License/attributes/StatusLKUP_IMS_IDENTIFIER_STATUSDEACTIVATION_REASON_CODEVARCHARDeactivation Reason Codeconfiguration/entityTypes/HCP/attributes/License/attributes/DeactivationReasonCodeLKUP_IMS_SRC_DEACTIVE_REASON_CODEEXPIRATION_DATEDATEconfiguration/entityTypes/HCP/attributes/License/attributes/ExpirationDateISSUE_DATEDATEState License Issue Dateconfiguration/entityTypes/HCP/attributes/License/attributes/IssueDateBRD_DATEDATEState License as of date or pull date. 
The as of date (or stamp date) is the date the current license file is provided to the Database Licensees.configuration/entityTypes/HCP/attributes/License/attributes/BrdDateSAMPLE_ELIGIBILITYVARCHARconfiguration/entityTypes/HCP/attributes/License/attributes/SampleEligibilitySOURCE_CDVARCHARDO NOT USE THIS ATTRIBUTE - will be deprecatedconfiguration/entityTypes/HCP/attributes/License/attributes/SourceCDRANKVARCHARLicense Rankconfiguration/entityTypes/HCP/attributes/License/attributes/RankCERTIFICATIONVARCHARCertificationconfiguration/entityTypes/HCP/attributes/License/attributes/CertificationREQ_SAMPL_NON_CTRLVARCHARRequest Samples Non-Controlledconfiguration/entityTypes/HCP/attributes/License/attributes/ReqSamplNonCtrlREQ_SAMPL_CTRLVARCHARRequest Samples Controlledconfiguration/entityTypes/HCP/attributes/License/attributes/ReqSamplCtrlRECV_SAMPL_NON_CTRLVARCHARReceives Samples Non-Controlled Substancesconfiguration/entityTypes/HCP/attributes/License/attributes/RecvSamplNonCtrlRECV_SAMPL_CTRLVARCHARReceives Samples Controlledconfiguration/entityTypes/HCP/attributes/License/attributes/RecvSamplCtrlDISTR_SAMPL_NON_CTRLVARCHARDistribute Samples Non-Controlled Substancesconfiguration/entityTypes/HCP/attributes/License/attributes/DistrSamplNonCtrlDISTR_SAMPL_CTRLVARCHARDistribute Samples Controlledconfiguration/entityTypes/HCP/attributes/License/attributes/DistrSamplCtrlSAMP_DRUG_SCHED_I_FLAGVARCHARSample Drug Schedule I flagconfiguration/entityTypes/HCP/attributes/License/attributes/SampDrugSchedIFlagSAMP_DRUG_SCHED_II_FLAGVARCHARSample Drug Schedule II flagconfiguration/entityTypes/HCP/attributes/License/attributes/SampDrugSchedIIFlagSAMP_DRUG_SCHED_III_FLAGVARCHARSample Drug Schedule III flagconfiguration/entityTypes/HCP/attributes/License/attributes/SampDrugSchedIIIFlagSAMP_DRUG_SCHED_IV_FLAGVARCHARSample Drug Schedule IV flagconfiguration/entityTypes/HCP/attributes/License/attributes/SampDrugSchedIVFlagSAMP_DRUG_SCHED_V_FLAGVARCHARSample Drug Schedule V 
flagconfiguration/entityTypes/HCP/attributes/License/attributes/SampDrugSchedVFlagSAMP_DRUG_SCHED_VI_FLAGVARCHARSample Drug Schedule VI flagconfiguration/entityTypes/HCP/attributes/License/attributes/SampDrugSchedVIFlagPRESCR_NON_CTRL_FLAGVARCHARPrescribe Non-controlled flagconfiguration/entityTypes/HCP/attributes/License/attributes/PrescrNonCtrlFlagPRESCR_APP_REQ_NON_CTRL_FLAGVARCHARPrescribe Application Request for Non-controlled Substances Flagconfiguration/entityTypes/HCP/attributes/License/attributes/PrescrAppReqNonCtrlFlagPRESCR_CTRL_FLAGVARCHARPrescribe Controlled flagconfiguration/entityTypes/HCP/attributes/License/attributes/PrescrCtrlFlagPRESCR_APP_REQ_CTRL_FLAGVARCHARPrescribe Application Request for Controlled Substances Flagconfiguration/entityTypes/HCP/attributes/License/attributes/PrescrAppReqCtrlFlagPRESCR_DRUG_SCHED_I_FLAGVARCHARPrescrDrugSchedIFlagconfiguration/entityTypes/HCP/attributes/License/attributes/PrescrDrugSchedIFlagPRESCR_DRUG_SCHED_II_FLAGVARCHARPrescribe Schedule II Flagconfiguration/entityTypes/HCP/attributes/License/attributes/PrescrDrugSchedIIFlagPRESCR_DRUG_SCHED_III_FLAGVARCHARPrescribe Schedule III Flagconfiguration/entityTypes/HCP/attributes/License/attributes/PrescrDrugSchedIIIFlagPRESCR_DRUG_SCHED_IV_FLAGVARCHARPrescribe Schedule IV Flagconfiguration/entityTypes/HCP/attributes/License/attributes/PrescrDrugSchedIVFlagPRESCR_DRUG_SCHED_V_FLAGVARCHARPrescribe Schedule V Flagconfiguration/entityTypes/HCP/attributes/License/attributes/PrescrDrugSchedVFlagPRESCR_DRUG_SCHED_VI_FLAGVARCHARPrescribe Schedule VI Flagconfiguration/entityTypes/HCP/attributes/License/attributes/PrescrDrugSchedVIFlagSUPERVISORY_REL_CD_NON_CTRLVARCHARSupervisory Relationship for Non-Controlled Substancesconfiguration/entityTypes/HCP/attributes/License/attributes/SupervisoryRelCdNonCtrlSUPERVISORY_REL_CD_CTRLVARCHARSupervisoryRelCdCtrlconfiguration/entityTypes/HCP/attributes/License/attributes/SupervisoryRelCdCtrlCOLLABORATIVE_NONCTRLVARCHARCollaboration for 
Non-Controlled Substancesconfiguration/entityTypes/HCP/attributes/License/attributes/CollaborativeNonctrlCOLLABORATIVE_CTRLVARCHARCollaboration for Controlled Substancesconfiguration/entityTypes/HCP/attributes/License/attributes/CollaborativeCtrlINCLUSIONARYVARCHARInclusionaryconfiguration/entityTypes/HCP/attributes/License/attributes/InclusionaryEXCLUSIONARYVARCHARExclusionaryconfiguration/entityTypes/HCP/attributes/License/attributes/ExclusionaryDELEGATION_NON_CTRLVARCHARDelegationNonCtrlconfiguration/entityTypes/HCP/attributes/License/attributes/DelegationNonCtrlDELEGATION_CTRLVARCHARDelegation for Controlled Substancesconfiguration/entityTypes/HCP/attributes/License/attributes/DelegationCtrlDISCIPLINARY_ACTION_STATUSVARCHARconfiguration/entityTypes/HCP/attributes/License/attributes/DisciplinaryActionStatusADDRESSReltio URI: configuration/entityTypes/HCP/attributes/Address, configuration/entityTypes/HCO/attributes/AddressMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameADDRESS_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypePRIMARY_AFFILIATIONVARCHARconfiguration/relationTypes/HasAddress/attributes/PrimaryAffiliation, configuration/relationTypes/HasAddress/attributes/PrimaryAffiliationLKUP_IMS_YES_NOSOURCE_ADDRESS_IDVARCHARconfiguration/relationTypes/HasAddress/attributes/SourceAddressID, configuration/relationTypes/HasAddress/attributes/SourceAddressIDADDRESS_TYPEVARCHARconfiguration/relationTypes/HasAddress/attributes/AddressType, configuration/relationTypes/HasAddress/attributes/AddressTypeLKUP_IMS_ADDR_TYPECARE_OFVARCHARconfiguration/relationTypes/HasAddress/attributes/CareOf, configuration/relationTypes/HasAddress/attributes/CareOfPRIMARYBOOLEANconfiguration/relationTypes/HasAddress/attributes/Primary, 
configuration/relationTypes/HasAddress/attributes/PrimaryADDRESS_RANKVARCHARconfiguration/relationTypes/HasAddress/attributes/AddressRank, configuration/relationTypes/HasAddress/attributes/AddressRankSOURCE_NAMEVARCHARconfiguration/relationTypes/HasAddress/attributes/SourceAddressInfo/attributes/SourceName, configuration/relationTypes/HasAddress/attributes/SourceAddressInfo/attributes/SourceNameSOURCE_LOCATION_IDVARCHARconfiguration/relationTypes/HasAddress/attributes/SourceAddressInfo/attributes/SourceLocationId, configuration/relationTypes/HasAddress/attributes/SourceAddressInfo/attributes/SourceLocationIdADDRESS_LINE1VARCHARconfiguration/entityTypes/Location/attributes/AddressLine1, configuration/entityTypes/Location/attributes/AddressLine1ADDRESS_LINE2VARCHARconfiguration/entityTypes/Location/attributes/AddressLine2, configuration/entityTypes/Location/attributes/AddressLine2ADDRESS_LINE3VARCHARAddressLine3configuration/entityTypes/Location/attributes/AddressLine3, configuration/entityTypes/Location/attributes/AddressLine3ADDRESS_LINE4VARCHARAddressLine4configuration/entityTypes/Location/attributes/AddressLine4, configuration/entityTypes/Location/attributes/AddressLine4PREMISEVARCHARconfiguration/entityTypes/Location/attributes/Premise, configuration/entityTypes/Location/attributes/PremiseSTREETVARCHARconfiguration/entityTypes/Location/attributes/Street, configuration/entityTypes/Location/attributes/StreetFLOORVARCHARN/Aconfiguration/entityTypes/Location/attributes/Floor, configuration/entityTypes/Location/attributes/FloorBUILDINGVARCHARN/Aconfiguration/entityTypes/Location/attributes/Building, configuration/entityTypes/Location/attributes/BuildingCITYVARCHARconfiguration/entityTypes/Location/attributes/City, configuration/entityTypes/Location/attributes/CitySTATE_PROVINCEVARCHARconfiguration/entityTypes/Location/attributes/StateProvince, 
configuration/entityTypes/Location/attributes/StateProvinceSTATE_PROVINCE_CODEVARCHARconfiguration/entityTypes/Location/attributes/StateProvinceCode, configuration/entityTypes/Location/attributes/StateProvinceCodeLKUP_IMS_STATE_CODEPOSTAL_CODEVARCHARconfiguration/entityTypes/Location/attributes/Zip/attributes/PostalCode, configuration/entityTypes/Location/attributes/Zip/attributes/PostalCodeZIP5VARCHARconfiguration/entityTypes/Location/attributes/Zip/attributes/Zip5, configuration/entityTypes/Location/attributes/Zip/attributes/Zip5ZIP4VARCHARconfiguration/entityTypes/Location/attributes/Zip/attributes/Zip4, configuration/entityTypes/Location/attributes/Zip/attributes/Zip4COUNTRYVARCHARconfiguration/entityTypes/Location/attributes/CountryLKUP_IMS_COUNTRY_CODECBSA_CODEVARCHARCore Based Statistical Areaconfiguration/entityTypes/Location/attributes/CBSACode, configuration/entityTypes/Location/attributes/CBSACodeCBSA_CDFIPS_COUNTY_CODEVARCHARFIPS county Codeconfiguration/entityTypes/Location/attributes/FIPSCountyCode, configuration/entityTypes/Location/attributes/FIPSCountyCodeFIPS_STATE_CODEVARCHARFIPS State Codeconfiguration/entityTypes/Location/attributes/FIPSStateCode, configuration/entityTypes/Location/attributes/FIPSStateCodeDPVVARCHARUSPS delivery point validation. 
R = Range Check; C = Clerk; F = Formally Valid; V = DPV Validconfiguration/entityTypes/Location/attributes/DPV, configuration/entityTypes/Location/attributes/DPVMSAVARCHARMetropolitan Statistical Area for a businessconfiguration/entityTypes/Location/attributes/MSA, configuration/entityTypes/Location/attributes/MSALATITUDEVARCHARconfiguration/entityTypes/Location/attributes/GeoLocation/attributes/LatitudeLONGITUDEVARCHARconfiguration/entityTypes/Location/attributes/GeoLocation/attributes/LongitudeGEO_ACCURACYVARCHARconfiguration/entityTypes/Location/attributes/GeoLocation/attributes/GeoAccuracyGEO_CODING_SYSTEMVARCHARconfiguration/entityTypes/Location/attributes/GeoLocation/attributes/GeoCodingSystemADDRESS_INPUTVARCHARconfiguration/entityTypes/Location/attributes/AddressInput, configuration/entityTypes/Location/attributes/AddressInputSUB_ADMINISTRATIVE_AREAVARCHARThis field holds the smallest geographic data element within a country. For instance, USA County.configuration/entityTypes/Location/attributes/SubAdministrativeArea, configuration/entityTypes/Location/attributes/SubAdministrativeAreaPOSTAL_CITYVARCHARconfiguration/entityTypes/Location/attributes/PostalCity, configuration/entityTypes/Location/attributes/PostalCityLOCALITYVARCHARThis field holds the most common population center data element within a country. 
For instance, USA City, Canadian Municipality.configuration/entityTypes/Location/attributes/Locality, configuration/entityTypes/Location/attributes/LocalityVERIFICATION_STATUSVARCHARconfiguration/entityTypes/Location/attributes/VerificationStatus, configuration/entityTypes/Location/attributes/VerificationStatusSTATUS_CHANGE_DATEDATEStatus Change Dateconfiguration/entityTypes/Location/attributes/StatusChangeDate, configuration/entityTypes/Location/attributes/StatusChangeDateADDRESS_STATUSVARCHARStatus of the Addressconfiguration/entityTypes/Location/attributes/AddressStatus, configuration/entityTypes/Location/attributes/AddressStatusACTIVE_ADDRESSBOOLEANconfiguration/relationTypes/HasAddress/attributes/Active, configuration/relationTypes/HasAddress/attributes/ActiveLOC_CONF_INDVARCHARconfiguration/relationTypes/HasAddress/attributes/LocConfInd, configuration/relationTypes/HasAddress/attributes/LocConfIndLKUP_IMS_LOCATION_CONFIDENCEBEST_RECORDVARCHARconfiguration/relationTypes/HasAddress/attributes/BestRecord, configuration/relationTypes/HasAddress/attributes/BestRecordRELATION_STATUS_CHANGE_DATEDATEconfiguration/relationTypes/HasAddress/attributes/RelationStatusChangeDate, configuration/relationTypes/HasAddress/attributes/RelationStatusChangeDateVALIDATION_STATUSVARCHARValidation status of the Address. 
When Addresses are merged, the loser Address is set to INVL.configuration/relationTypes/HasAddress/attributes/ValidationStatus, configuration/relationTypes/HasAddress/attributes/ValidationStatusLKUP_IMS_VAL_STATUSSTATUSVARCHARconfiguration/relationTypes/HasAddress/attributes/Status, configuration/relationTypes/HasAddress/attributes/StatusLKUP_IMS_ADDR_STATUSHCO_NAMEVARCHARconfiguration/relationTypes/HasAddress/attributes/HcoName, configuration/relationTypes/HasAddress/attributes/HcoNameMAIN_HCO_NAMEVARCHARconfiguration/relationTypes/HasAddress/attributes/MainHcoName, configuration/relationTypes/HasAddress/attributes/MainHcoNameBUILD_LABELVARCHARconfiguration/relationTypes/HasAddress/attributes/BuildLabel, configuration/relationTypes/HasAddress/attributes/BuildLabelPO_BOXVARCHARconfiguration/relationTypes/HasAddress/attributes/POBox, configuration/relationTypes/HasAddress/attributes/POBoxVALIDATION_REASONVARCHARconfiguration/relationTypes/HasAddress/attributes/ValidationReason, configuration/relationTypes/HasAddress/attributes/ValidationReasonLKUP_IMS_VAL_STATUS_CHANGE_REASONVALIDATION_CHANGE_DATEDATEconfiguration/relationTypes/HasAddress/attributes/ValidationChangeDate, configuration/relationTypes/HasAddress/attributes/ValidationChangeDateSTATUS_REASON_CODEVARCHARconfiguration/relationTypes/HasAddress/attributes/StatusReasonCode, configuration/relationTypes/HasAddress/attributes/StatusReasonCodeLKUP_IMS_SRC_DEACTIVE_REASON_CODEPRIMARY_MAILBOOLEANconfiguration/relationTypes/HasAddress/attributes/PrimaryMail, configuration/relationTypes/HasAddress/attributes/PrimaryMailVISIT_ACTIVITYVARCHARconfiguration/relationTypes/HasAddress/attributes/VisitActivity, configuration/relationTypes/HasAddress/attributes/VisitActivityDERIVED_ADDRESSVARCHARconfiguration/relationTypes/HasAddress/attributes/derivedAddress, configuration/relationTypes/HasAddress/attributes/derivedAddressNEIGHBORHOODVARCHARconfiguration/entityTypes/Location/attributes/Neighborhood, 
configuration/entityTypes/Location/attributes/NeighborhoodAVCVARCHARconfiguration/entityTypes/Location/attributes/AVC, configuration/entityTypes/Location/attributes/AVCCOUNTRY_CODEVARCHARconfiguration/entityTypes/Location/attributes/CountryLKUP_IMS_COUNTRY_CODEGEO_LOCATION.LATITUDEVARCHARconfiguration/entityTypes/Location/attributes/GeoLocation/attributes/LatitudeGEO_LOCATION.LONGITUDEVARCHARconfiguration/entityTypes/Location/attributes/GeoLocation/attributes/LongitudeGEO_LOCATION.GEO_ACCURACYVARCHARconfiguration/entityTypes/Location/attributes/GeoLocation/attributes/GeoAccuracyGEO_LOCATION.GEO_CODING_SYSTEMVARCHARconfiguration/entityTypes/Location/attributes/GeoLocation/attributes/GeoCodingSystemADDRESS_PHONEReltio URI: configuration/relationTypes/HasAddress/attributes/Phone, configuration/relationTypes/HasAddress/attributes/PhoneMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameADDRESS_URIVARCHARgenerated key descriptionPHONE_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeTYPE_IMSVARCHARconfiguration/relationTypes/HasAddress/attributes/Phone/attributes/TypeIMS, configuration/relationTypes/HasAddress/attributes/Phone/attributes/TypeIMSLKUP_IMS_COMMUNICATION_TYPENUMBERVARCHARconfiguration/relationTypes/HasAddress/attributes/Phone/attributes/Number, configuration/relationTypes/HasAddress/attributes/Phone/attributes/NumberEXTENSIONVARCHARconfiguration/relationTypes/HasAddress/attributes/Phone/attributes/Extension, configuration/relationTypes/HasAddress/attributes/Phone/attributes/ExtensionRANKVARCHARconfiguration/relationTypes/HasAddress/attributes/Phone/attributes/Rank, configuration/relationTypes/HasAddress/attributes/Phone/attributes/RankACTIVE_ADDRESS_PHONEBOOLEANconfiguration/relationTypes/HasAddress/attributes/Phone/attributes/Active, 
configuration/relationTypes/HasAddress/attributes/Phone/attributes/ActiveBEST_PHONE_INDICATORVARCHARconfiguration/relationTypes/HasAddress/attributes/Phone/attributes/BestPhoneIndicator, configuration/relationTypes/HasAddress/attributes/Phone/attributes/BestPhoneIndicatorADDRESS_DEAReltio URI: configuration/relationTypes/HasAddress/attributes/DEA, configuration/relationTypes/HasAddress/attributes/DEAMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameADDRESS_URIVARCHARgenerated key descriptionDEA_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeNUMBERVARCHARconfiguration/relationTypes/HasAddress/attributes/DEA/attributes/Number, configuration/relationTypes/HasAddress/attributes/DEA/attributes/NumberEXPIRATION_DATEDATEconfiguration/relationTypes/HasAddress/attributes/DEA/attributes/ExpirationDate, configuration/relationTypes/HasAddress/attributes/DEA/attributes/ExpirationDateSTATUSVARCHARconfiguration/relationTypes/HasAddress/attributes/DEA/attributes/Status, configuration/relationTypes/HasAddress/attributes/DEA/attributes/StatusLKUP_IMS_IDENTIFIER_STATUSDRUG_SCHEDULEVARCHARconfiguration/relationTypes/HasAddress/attributes/DEA/attributes/DrugSchedule, configuration/relationTypes/HasAddress/attributes/DEA/attributes/DrugScheduleBUSINESS_ACTIVITY_CODEVARCHARBusiness Activity Codeconfiguration/relationTypes/HasAddress/attributes/DEA/attributes/BusinessActivityCode, configuration/relationTypes/HasAddress/attributes/DEA/attributes/BusinessActivityCodeSUB_BUSINESS_ACTIVITY_CODEVARCHARSub Business Activity Codeconfiguration/relationTypes/HasAddress/attributes/DEA/attributes/SubBusinessActivityCode, configuration/relationTypes/HasAddress/attributes/DEA/attributes/SubBusinessActivityCodeDEA_CHANGE_REASON_CODEVARCHARDEA Change Reason Codeconfiguration/relationTypes/HasAddress/attributes/DEA/attributes/DEAChangeReasonCode, 
configuration/relationTypes/HasAddress/attributes/DEA/attributes/DEAChangeReasonCodeLKUP_IMS_SRC_DEACTIVE_REASON_CODEAUTHORIZATION_STATUSVARCHARAuthorization Statusconfiguration/relationTypes/HasAddress/attributes/DEA/attributes/AuthorizationStatus, configuration/relationTypes/HasAddress/attributes/DEA/attributes/AuthorizationStatusLKUP_IMS_IDENTIFIER_STATUSADDRESS_OFFICE_INFORMATIONReltio URI: configuration/relationTypes/HasAddress/attributes/OfficeInformation, configuration/relationTypes/HasAddress/attributes/OfficeInformationMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameADDRESS_URIVARCHARgenerated key descriptionOFFICE_INFORMATION_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeBEST_TIMESVARCHARconfiguration/relationTypes/HasAddress/attributes/OfficeInformation/attributes/BestTimes, configuration/relationTypes/HasAddress/attributes/OfficeInformation/attributes/BestTimesAPPT_REQUIREDBOOLEANconfiguration/relationTypes/HasAddress/attributes/OfficeInformation/attributes/ApptRequired, configuration/relationTypes/HasAddress/attributes/OfficeInformation/attributes/ApptRequiredOFFICE_NOTESVARCHARconfiguration/relationTypes/HasAddress/attributes/OfficeInformation/attributes/OfficeNotes, configuration/relationTypes/HasAddress/attributes/OfficeInformation/attributes/OfficeNotesSPECIALITIESReltio URI: configuration/entityTypes/HCP/attributes/Specialities, configuration/entityTypes/HCO/attributes/SpecialitiesMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameSPECIALITIES_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeSPECIALTY_TYPEVARCHARconfiguration/entityTypes/HCP/attributes/Specialities/attributes/SpecialtyType, 
configuration/entityTypes/HCO/attributes/Specialities/attributes/SpecialtyTypeLKUP_IMS_SPECIALTY_TYPESPECIALTYVARCHARconfiguration/entityTypes/HCP/attributes/Specialities/attributes/Specialty, configuration/entityTypes/HCO/attributes/Specialities/attributes/SpecialtyLKUP_IMS_SPECIALTYRANKVARCHARSpecialty Rankconfiguration/entityTypes/HCP/attributes/Specialities/attributes/Rank, configuration/entityTypes/HCO/attributes/Specialities/attributes/RankDESCVARCHARDO NOT USE THIS ATTRIBUTE - will be deprecatedconfiguration/entityTypes/HCP/attributes/Specialities/attributes/DescGROUPVARCHARconfiguration/entityTypes/HCP/attributes/Specialities/attributes/Group, configuration/entityTypes/HCO/attributes/Specialities/attributes/GroupSOURCE_CDVARCHARDO NOT USE THIS ATTRIBUTE - will be deprecatedconfiguration/entityTypes/HCP/attributes/Specialities/attributes/SourceCDSPECIALTY_DETAILVARCHARconfiguration/entityTypes/HCP/attributes/Specialities/attributes/SpecialtyDetail, configuration/entityTypes/HCO/attributes/Specialities/attributes/SpecialtyDetailPROFESSION_CODEVARCHARProfessionconfiguration/entityTypes/HCP/attributes/Specialities/attributes/ProfessionCodeLKUP_IMS_PROFESSIONPRIMARY_SPECIALTY_FLAGBOOLEANconfiguration/entityTypes/HCP/attributes/Specialities/attributes/PrimarySpecialtyFlag, configuration/entityTypes/HCO/attributes/Specialities/attributes/PrimarySpecialtyFlagSORT_ORDERVARCHARconfiguration/entityTypes/HCP/attributes/Specialities/attributes/SortOrder, configuration/entityTypes/HCO/attributes/Specialities/attributes/SortOrderBEST_RECORDVARCHARconfiguration/entityTypes/HCP/attributes/Specialities/attributes/BestRecord, configuration/entityTypes/HCO/attributes/Specialities/attributes/BestRecordSUB_SPECIALTYVARCHARconfiguration/entityTypes/HCP/attributes/Specialities/attributes/SubSpecialty, configuration/entityTypes/HCO/attributes/Specialities/attributes/SubSpecialtyLKUP_IMS_SPECIALTYSUB_SPECIALTY_RANKVARCHARSubSpecialty 
Rankconfiguration/entityTypes/HCP/attributes/Specialities/attributes/SubSpecialtyRank, configuration/entityTypes/HCO/attributes/Specialities/attributes/SubSpecialtyRankTRUSTED_INDICATORVARCHARconfiguration/entityTypes/HCP/attributes/Specialities/attributes/TrustedIndicator, configuration/entityTypes/HCO/attributes/Specialities/attributes/TrustedIndicatorLKUP_IMS_YES_NORAW_SPECIALTYVARCHARconfiguration/entityTypes/HCP/attributes/Specialities/attributes/RawSpecialty, configuration/entityTypes/HCO/attributes/Specialities/attributes/RawSpecialtyRAW_SPECIALTY_DESCRIPTIONVARCHARconfiguration/entityTypes/HCP/attributes/Specialities/attributes/RawSpecialtyDescription, configuration/entityTypes/HCO/attributes/Specialities/attributes/RawSpecialtyDescriptionIDENTIFIERSReltio URI: configuration/entityTypes/HCP/attributes/Identifiers, configuration/entityTypes/HCO/attributes/IdentifiersMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameIDENTIFIERS_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeTYPEVARCHARconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/Type, configuration/entityTypes/HCO/attributes/Identifiers/attributes/TypeLKUP_IMS_HCP_IDENTIFIER_TYPE,LKUP_IMS_HCO_IDENTIFIER_TYPEIDVARCHARconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/ID, configuration/entityTypes/HCO/attributes/Identifiers/attributes/IDORDERVARCHARDisplays the order of priority for an MPN for those facilities that share an MPN. Valid values are: P = the MPN on a business record is the primary identifier for the business and O = the MPN is a secondary identifier. (Using P for the MPN supports aggregating clinical volumes and avoids double counting).configuration/entityTypes/HCP/attributes/Identifiers/attributes/Order, configuration/entityTypes/HCO/attributes/Identifiers/attributes/OrderCATEGORYVARCHARAdditional information about the identifier. 
For a DDD identifier, the DDD subcategory code (e.g. H4, D1, A2). For a DEA identifier, contains the DEA activity code (e.g. M for Mid Level Practitioner)configuration/entityTypes/HCP/attributes/Identifiers/attributes/Category, configuration/entityTypes/HCO/attributes/Identifiers/attributes/CategoryLKUP_IMS_IDENTIFIERS_CATEGORYSTATUSVARCHARconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/Status, configuration/entityTypes/HCO/attributes/Identifiers/attributes/StatusLKUP_IMS_IDENTIFIER_STATUSAUTHORIZATION_STATUSVARCHARAuthorization Statusconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/AuthorizationStatus, configuration/entityTypes/HCO/attributes/Identifiers/attributes/AuthorizationStatusLKUP_IMS_IDENTIFIER_STATUSDEACTIVATION_REASON_CODEVARCHARconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/DeactivationReasonCode, configuration/entityTypes/HCO/attributes/Identifiers/attributes/DeactivationReasonCodeLKUP_IMS_SRC_DEACTIVE_REASON_CODEDEACTIVATION_DATEDATEconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/DeactivationDate, configuration/entityTypes/HCO/attributes/Identifiers/attributes/DeactivationDateREACTIVATION_DATEDATEconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/ReactivationDate, configuration/entityTypes/HCO/attributes/Identifiers/attributes/ReactivationDateNATIONAL_ID_ATTRIBUTEVARCHARconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/NationalIdAttribute, configuration/entityTypes/HCO/attributes/Identifiers/attributes/NationalIdAttributeAMAMDDO_FLAGVARCHARAMA MD-DO Flagconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/AMAMDDOFlagMAJOR_PROF_ACTVARCHARMajor Professional Activity 
Codeconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/MajorProfActHOSPITAL_HOURSVARCHARHospitalHoursconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/HospitalHoursAMA_HOSPITAL_IDVARCHARAMAHospitalIDconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/AMAHospitalIDPRACTICE_TYPE_CODEVARCHARPracticeTypeCodeconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/PracticeTypeCodeEMPLOYMENT_TYPE_CODEVARCHAREmploymentTypeCodeconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/EmploymentTypeCodeBIRTH_CITYVARCHARBirthCityconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/BirthCityBIRTH_STATEVARCHARBirthStateconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/BirthStateBIRTH_COUNTRYVARCHARBirthCountryconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/BirthCountryMEDICAL_SCHOOLVARCHARMedicalSchoolconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/MedicalSchoolGRADUATION_YEARVARCHARGraduationYearconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/GraduationYearNUM_OF_PYSICIANSVARCHARNumOfPysiciansconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/NumOfPysiciansSTATEVARCHARLicenseStateconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/State, configuration/entityTypes/HCO/attributes/Identifiers/attributes/StateLKUP_IMS_STATE_CODETRUSTED_INDICATORVARCHARconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/TrustedIndicator, configuration/entityTypes/HCO/attributes/Identifiers/attributes/TrustedIndicatorLKUP_IMS_YES_NOHARD_LINK_INDICATORVARCHARconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/HardLinkIndicator, configuration/entityTypes/HCO/attributes/Identifiers/attributes/HardLinkIndicatorLKUP_IMS_YES_NOLAST_VERIFICATION_STATUSVARCHARconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/LastVerificationStatus, 
configuration/entityTypes/HCO/attributes/Identifiers/attributes/LastVerificationStatusLAST_VERIFICATION_DATEDATEconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/LastVerificationDate, configuration/entityTypes/HCO/attributes/Identifiers/attributes/LastVerificationDateACTIVATION_DATEDATEconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/ActivationDate, configuration/entityTypes/HCO/attributes/Identifiers/attributes/ActivationDateSPEAKERReltio URI: configuration/entityTypes/HCP/attributes/SpeakerMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameSPEAKER_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeIS_SPEAKERBOOLEANconfiguration/entityTypes/HCP/attributes/Speaker/attributes/IsSpeakerIS_COMPANY_APPROVED_SPEAKERBOOLEANAttribute to track if an HCP is a COMPANY approved speakerconfiguration/entityTypes/HCP/attributes/Speaker/attributes/IsCOMPANYApprovedSpeakerLAST_BRIEFING_DATEDATETrack the last date that the HCP received the briefing/training to be certified as an approved COMPANY Speakerconfiguration/entityTypes/HCP/attributes/Speaker/attributes/LastBriefingDateSPEAKER_STATUSVARCHARconfiguration/entityTypes/HCP/attributes/Speaker/attributes/SpeakerStatusLKUP_SPEAKERSTATUSSPEAKER_TYPEVARCHARconfiguration/entityTypes/HCP/attributes/Speaker/attributes/SpeakerTypeLKUP_SPEAKERTYPESPEAKER_LEVELVARCHARconfiguration/entityTypes/HCP/attributes/Speaker/attributes/SpeakerLevelLKUP_SPEAKERLEVELHCP_WORKPLACE_MAIN_HCOReltio URI: configuration/entityTypes/HCO/attributes/MainHCOMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameWORKPLACE_URIVARCHARgenerated key descriptionMAINHCO_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeNAMEVARCHARNameconfiguration/entityTypes/HCO/attributes/NameOTHER_NAMESVARCHAROther 
Note: in the tables below, attribute URIs are abbreviated to attributes/<Name>, relative to each table's base Reltio URI; (HCP), (HCO) or (HCP, HCO) marks which of the listed base URIs carry the attribute. URIs that do not fall under the base (e.g. relation types) are written in full.

Names | configuration/entityTypes/HCO/attributes/OtherNames
TYPE_CODE | VARCHAR | Customer Type | configuration/entityTypes/HCO/attributes/TypeCode | LKUP_IMS_HCO_CUST_TYPE
SOURCE_ID | VARCHAR | Source ID | configuration/entityTypes/HCO/attributes/SourceID
VALIDATION_STATUS | VARCHAR | | configuration/relationTypes/RLE.MAI/attributes/ValidationStatus | LKUP_IMS_VAL_STATUS
VALIDATION_CHANGE_DATE | DATE | | configuration/relationTypes/RLE.MAI/attributes/ValidationChangeDate
AFFILIATION_STATUS | VARCHAR | | configuration/relationTypes/RLE.MAI/attributes/AffiliationStatus | LKUP_IMS_STATUS
COUNTRY | VARCHAR | Country Code | configuration/relationTypes/RLE.MAI/attributes/Country | LKUP_IMS_COUNTRY_CODE

HCP_WORKPLACE_MAIN_HCO_CLASSOF_TRADE_N
Reltio URI: configuration/entityTypes/HCO/attributes/ClassofTradeN
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
WORKPLACE_URI | VARCHAR | generated key description
MAINHCO_URI | VARCHAR | generated key description
CLASSOFTRADEN_URI | VARCHAR | generated key description
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
PRIORITY | VARCHAR | Numeric code for the primary class of trade | attributes/Priority
CLASSIFICATION | VARCHAR | | attributes/Classification | LKUP_IMS_HCO_CLASSOFTRADEN_CLASSIFICATION
FACILITY_TYPE | VARCHAR | | attributes/FacilityType | LKUP_IMS_HCO_CLASSOFTRADEN_FACILITYTYPE
SPECIALTY | VARCHAR | | attributes/Specialty | LKUP_IMS_HCO_CLASSOFTRADEN_SPECIALTY

HCP_MAIN_WORKPLACE_CLASSOF_TRADE_N
Reltio URI: configuration/entityTypes/HCO/attributes/ClassofTradeN
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
MAINWORKPLACE_URI | VARCHAR | generated key description
CLASSOFTRADEN_URI | VARCHAR | generated key description
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
PRIORITY | VARCHAR | Numeric code for the primary class of trade | attributes/Priority
CLASSIFICATION | VARCHAR | | attributes/Classification | LKUP_IMS_HCO_CLASSOFTRADEN_CLASSIFICATION
FACILITY_TYPE | VARCHAR | | attributes/FacilityType | LKUP_IMS_HCO_CLASSOFTRADEN_FACILITYTYPE
SPECIALTY | VARCHAR | | attributes/Specialty | LKUP_IMS_HCO_CLASSOFTRADEN_SPECIALTY

PHONE
Reltio URI: configuration/entityTypes/HCP/attributes/Phone, configuration/entityTypes/HCO/attributes/Phone
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
PHONE_URI | VARCHAR | generated key description
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
TYPE_IMS | VARCHAR | | attributes/TypeIMS (HCP, HCO) | LKUP_IMS_COMMUNICATION_TYPE
NUMBER | VARCHAR | | attributes/Number (HCP, HCO)
EXTENSION | VARCHAR | | attributes/Extension (HCP, HCO)
RANK | VARCHAR | | attributes/Rank (HCP, HCO)
COUNTRY_CODE | VARCHAR | | attributes/CountryCode (HCP, HCO) | LKUP_IMS_COUNTRY_CODE
AREA_CODE | VARCHAR | | attributes/AreaCode (HCP, HCO)
LOCAL_NUMBER | VARCHAR | | attributes/LocalNumber (HCP, HCO)
FORMATTED_NUMBER | VARCHAR | Formatted number of the phone | attributes/FormattedNumber (HCP, HCO)
VALIDATION_STATUS | VARCHAR | | attributes/ValidationStatus (HCP, HCO)
VALIDATION_DATE | DATE | | attributes/ValidationDate (HCP, HCO)
LINE_TYPE | VARCHAR | | attributes/LineType (HCP, HCO)
FORMAT_MASK | VARCHAR | | attributes/FormatMask (HCP, HCO)
DIGIT_COUNT | VARCHAR | | attributes/DigitCount (HCP, HCO)
GEO_AREA | VARCHAR | | attributes/GeoArea (HCP, HCO)
GEO_COUNTRY | VARCHAR | | attributes/GeoCountry (HCP, HCO)
DQ_CODE | VARCHAR | | attributes/DQCode (HCP, HCO)
ACTIVE_PHONE | BOOLEAN | DO NOT USE THIS ATTRIBUTE - will be deprecated | attributes/Active (HCP)
BEST_PHONE_INDICATOR | VARCHAR | | attributes/BestPhoneIndicator (HCP, HCO)

PHONE_SOURCE_DATA
Reltio URI: configuration/entityTypes/HCP/attributes/Phone/attributes/SourceData, configuration/entityTypes/HCO/attributes/Phone/attributes/SourceData
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
PHONE_URI | VARCHAR | generated key description
SOURCE_DATA_URI | VARCHAR | generated key description
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
DATASET_IDENTIFIER | VARCHAR | | attributes/DatasetIdentifier (HCP, HCO)
DATASET_PARTY_IDENTIFIER | VARCHAR | | attributes/DatasetPartyIdentifier (HCP, HCO)
DATASET_PHONE_TYPE | VARCHAR | | attributes/DatasetPhoneType (HCP, HCO) | LKUP_IMS_COMMUNICATION_TYPE
RAW_DATASET_PHONE_TYPE | VARCHAR | | attributes/RawDatasetPhoneType (HCP, HCO)
BEST_PHONE_INDICATOR | VARCHAR | | attributes/BestPhoneIndicator (HCP, HCO)

EMAIL
Reltio URI: configuration/entityTypes/HCP/attributes/Email, configuration/entityTypes/HCO/attributes/Email
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
EMAIL_URI | VARCHAR | generated key description
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
TYPE_IMS | VARCHAR | | attributes/TypeIMS (HCP, HCO) | LKUP_IMS_EMAIL_TYPE
EMAIL | VARCHAR | | attributes/Email (HCP, HCO)
DOMAIN | VARCHAR | | attributes/Domain (HCP, HCO)
DOMAIN_TYPE | VARCHAR | | attributes/DomainType (HCP, HCO)
USERNAME | VARCHAR | | attributes/Username (HCP, HCO)
RANK | VARCHAR | | attributes/Rank (HCP, HCO)
VALIDATION_STATUS | VARCHAR | | attributes/ValidationStatus (HCP, HCO)
VALIDATION_DATE | DATE | | attributes/ValidationDate (HCP, HCO)
ACTIVE_EMAIL_HCP | VARCHAR | | attributes/Active (HCP)
DQ_CODE | VARCHAR | | attributes/DQCode (HCP, HCO)
SOURCE_CD | VARCHAR | DO NOT USE THIS ATTRIBUTE - will be deprecated | attributes/SourceCD (HCP)
ACTIVE_EMAIL_HCO | BOOLEAN | | attributes/Active (HCO)

DISCLOSURE
Disclosure - Reporting derived attributes
Reltio URI: configuration/entityTypes/HCP/attributes/Disclosure, configuration/entityTypes/HCO/attributes/Disclosure
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
DISCLOSURE_URI | VARCHAR | generated key description
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
DGS_CATEGORY | VARCHAR | | attributes/DGSCategory (HCP, HCO) | LKUP_BENEFITCATEGORY_HCP, LKUP_BENEFITCATEGORY_HCO
DGS_TITLE | VARCHAR | | attributes/DGSTitle (HCP) | LKUP_BENEFITTITLE
DGS_QUALITY | VARCHAR | | attributes/DGSQuality (HCP) | LKUP_BENEFITQUALITY
DGS_SPECIALTY | VARCHAR | | attributes/DGSSpecialty (HCP) | LKUP_BENEFITSPECIALTY
CONTRACT_CLASSIFICATION | VARCHAR | | attributes/ContractClassification (HCP) | LKUP_CONTRACTCLASSIFICATION
CONTRACT_CLASSIFICATION_DATE | DATE | | attributes/ContractClassificationDate (HCP)
MILITARY | BOOLEAN | | attributes/Military (HCP)
LEGALSTATUS | VARCHAR | | attributes/LEGALSTATUS (HCP) | LKUP_LEGALSTATUS

THIRD_PARTY_VERIFY
Reltio URI: configuration/entityTypes/HCP/attributes/ThirdPartyVerify, configuration/entityTypes/HCO/attributes/ThirdPartyVerify
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
THIRD_PARTY_VERIFY_URI | VARCHAR | generated key description
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
SEND_FOR_VERIFY | VARCHAR | | attributes/SendForVerify (HCP, HCO) | LKUP_IMS_SEND_FOR_VALIDATION
VERIFY_DATE | VARCHAR | | attributes/VerifyDate (HCP, HCO)

PRIVACY_PREFERENCES
Reltio URI: configuration/entityTypes/HCP/attributes/PrivacyPreferences, configuration/entityTypes/HCO/attributes/PrivacyPreferences
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
PRIVACY_PREFERENCES_URI | VARCHAR | generated key description
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
OPT_OUT | BOOLEAN | | attributes/OptOut (HCP)
OPT_OUT_START_DATE | DATE | | attributes/OptOutStartDate (HCP)
ALLOWED_TO_CONTACT | BOOLEAN | | attributes/AllowedToContact (HCP)
PHONE_OPT_OUT | BOOLEAN | | attributes/PhoneOptOut (HCP, HCO)
EMAIL_OPT_OUT | BOOLEAN | | attributes/EmailOptOut (HCP, HCO)
FAX_OPT_OUT | BOOLEAN | | attributes/FaxOptOut (HCP, HCO)
VISIT_OPT_OUT | BOOLEAN | | attributes/VisitOptOut (HCP, HCO)
AMA_NO_CONTACT | BOOLEAN | | attributes/AMANoContact (HCP)
PDRP | BOOLEAN | | attributes/PDRP (HCP)
PDRP_DATE | DATE | | attributes/PDRPDate (HCP)
TEXT_MESSAGE_OPT_OUT | BOOLEAN | | attributes/TextMessageOptOut (HCP)
MAIL_OPT_OUT | BOOLEAN | | attributes/MailOptOut (HCP, HCO)
OPT_OUT_CHANGE_DATE | DATE | The date the opt out indicator was changed | attributes/OptOutChangeDate (HCP)
REMOTE_OPT_OUT | BOOLEAN | | attributes/RemoteOptOut (HCP, HCO)
OPT_OUT_ONE_KEY | BOOLEAN | | attributes/OptOutOneKey (HCP, HCO)
OPT_OUT_SAFE_HARBOR | BOOLEAN | | attributes/OptOutSafeHarbor (HCP)
KEY_OPINION_LEADER | BOOLEAN | | attributes/KeyOpinionLeader (HCP)
RESIDENT_INDICATOR | BOOLEAN | | attributes/ResidentIndicator (HCP)
ALLOW_SAFE_HARBOR | BOOLEAN | | attributes/AllowSafeHarbor (HCO)

SANCTION
Reltio URI: configuration/entityTypes/HCP/attributes/Sanction
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
SANCTION_URI | VARCHAR | generated key description
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
SANCTION_ID | VARCHAR | Court sanction Id for any case | attributes/SanctionId
ACTION_CODE | VARCHAR | Court sanction code for a case | attributes/ActionCode
ACTION_DESCRIPTION | VARCHAR | | attributes/ActionDescription
BOARD_CODE | VARCHAR | Court case board id | attributes/BoardCode
BOARD_DESC | VARCHAR | Court case board description | attributes/BoardDesc
ACTION_DATE | DATE | | attributes/ActionDate
SANCTION_PERIOD_START_DATE | DATE | | attributes/SanctionPeriodStartDate
SANCTION_PERIOD_END_DATE | DATE | | attributes/SanctionPeriodEndDate
MONTH_DURATION | VARCHAR | | attributes/MonthDuration
FINE_AMOUNT | VARCHAR | | attributes/FineAmount
OFFENSE_CODE | VARCHAR | | attributes/OffenseCode
OFFENSE_DESCRIPTION | VARCHAR | | attributes/OffenseDescription
OFFENSE_DATE | DATE | | attributes/OffenseDate

HCP_SANCTIONS
Reltio URI: configuration/entityTypes/HCP/attributes/Sanctions
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
SANCTIONS_URI | VARCHAR | generated key description
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
IDENTIFIER_TYPE | VARCHAR | | attributes/IdentifierType | LKUP_IMS_HCP_IDENTIFIER_TYPE
IDENTIFIER_ID | VARCHAR | | attributes/IdentifierID
TYPE_CODE | VARCHAR | Type of sanction/restriction for a given provider | attributes/TypeCode | LKUP_IMS_SNCTN_RSTR_ACTN
DEACTIVATION_REASON_CODE | VARCHAR | | attributes/DeactivationReasonCode | LKUP_IMS_SNCTN_RSTR_DACT_RSN
DISPOSITION_CATEGORY_CODE | VARCHAR | | attributes/DispositionCategoryCode | LKUP_IMS_SNCTN_RSTR_DSP_CATG
EXCLUSION_CODE | VARCHAR | | attributes/ExclusionCode | LKUP_IMS_SNCTN_RSTR_EXCL
DESCRIPTION | VARCHAR | | attributes/Description
URL | VARCHAR | | attributes/URL
ISSUED_DATE | DATE | | attributes/IssuedDate
EFFECTIVE_DATE | DATE | | attributes/EffectiveDate
REINSTATEMENT_DATE | DATE | | attributes/ReinstatementDate
IS_STATE_WAIVER | BOOLEAN | | attributes/IsStateWaiver
STATUS_CODE | VARCHAR | | attributes/StatusCode | LKUP_IMS_IDENTIFIER_STATUS
SOURCE_CODE | VARCHAR | | attributes/SourceCode | LKUP_IMS_SNCTN_RSTR_SRC
PUBLICATION_DATE | DATE | | attributes/PublicationDate
GOVERNMENT_LEVEL_CODE | VARCHAR | | attributes/GovernmentLevelCode | LKUP_IMS_GOVT_LVL

HCP_GSA_SANCTION
Reltio URI: configuration/entityTypes/HCP/attributes/GSASanction
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
GSA_SANCTION_URI | VARCHAR | generated key description
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
SANCTION_ID | VARCHAR | | attributes/SanctionId
FIRST_NAME | VARCHAR | | attributes/FirstName
MIDDLE_NAME | VARCHAR | | attributes/MiddleName
LAST_NAME | VARCHAR | | attributes/LastName
SUFFIX_NAME | VARCHAR | | attributes/SuffixName
CITY | VARCHAR | | attributes/City
STATE | VARCHAR | | attributes/State
ZIP | VARCHAR | | attributes/Zip
ACTION_DATE | VARCHAR | | attributes/ActionDate
TERM_DATE | VARCHAR | | attributes/TermDate
AGENCY | VARCHAR | | attributes/Agency
CONFIDENCE | VARCHAR | | attributes/Confidence

DEGREES
DO NOT USE THIS ATTRIBUTE - will be deprecated
Reltio URI: configuration/entityTypes/HCP/attributes/Degrees
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
DEGREES_URI | VARCHAR | generated key description
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
DEGREE | VARCHAR | DO NOT USE THIS ATTRIBUTE - will be deprecated | attributes/Degree | DEGREE
BEST_DEGREE | VARCHAR | DO NOT USE THIS ATTRIBUTE - will be deprecated | attributes/BestDegree

CERTIFICATES
Reltio URI: configuration/entityTypes/HCP/attributes/Certificates
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
CERTIFICATES_URI | VARCHAR | generated key description
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
CERTIFICATE_ID | VARCHAR | | attributes/CertificateId
NAME | VARCHAR | | attributes/Name
BOARD_ID | VARCHAR | | attributes/BoardId
BOARD_NAME | VARCHAR | | attributes/BoardName
INTERNAL_HCP_STATUS | VARCHAR | | attributes/InternalHCPStatus
INTERNAL_HCP_INACTIVE_REASON_CODE | VARCHAR | | attributes/InternalHCPInactiveReasonCode
INTERNAL_SAMPLING_STATUS | VARCHAR | | attributes/InternalSamplingStatus
PVS_ELIGIBILTY | VARCHAR | | attributes/PVSEligibilty

EMPLOYMENT
Reltio URI: configuration/entityTypes/HCP/attributes/Employment
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
EMPLOYMENT_URI | VARCHAR | generated key description
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
TITLE | VARCHAR | | configuration/relationTypes/Employment/attributes/Title
SUMMARY | VARCHAR | | configuration/relationTypes/Employment/attributes/Summary
IS_CURRENT | BOOLEAN | | configuration/relationTypes/Employment/attributes/IsCurrent
NAME | VARCHAR | Name | configuration/entityTypes/Organization/attributes/Name

CREDENTIAL
DO NOT USE THIS ATTRIBUTE - will be deprecated
Reltio URI: configuration/entityTypes/HCP/attributes/Credential
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
CREDENTIAL_URI | VARCHAR | generated key description
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
RANK | VARCHAR | | attributes/Rank
CREDENTIAL | VARCHAR | | attributes/Credential | CRED

PROFESSION
Reltio URI: configuration/entityTypes/HCP/attributes/Profession
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
PROFESSION_URI | VARCHAR | generated key description
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
PROFESSION_CODE | VARCHAR | Profession | attributes/ProfessionCode | LKUP_IMS_PROFESSION
RANK | VARCHAR | Profession Rank | attributes/Rank

EDUCATION
Reltio URI: configuration/entityTypes/HCP/attributes/Education
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
EDUCATION_URI | VARCHAR | generated key description
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
SCHOOL_NAME | VARCHAR | | attributes/SchoolName | LKUP_IMS_SCHOOL_CODE
TYPE | VARCHAR | | attributes/Type
DEGREE | VARCHAR | | attributes/Degree
YEAR_OF_GRADUATION | VARCHAR | DO NOT USE THIS ATTRIBUTE - will be deprecated | attributes/YearOfGraduation
GRADUATED | BOOLEAN | DO NOT USE THIS ATTRIBUTE - will be deprecated | attributes/Graduated
GPA | VARCHAR | | attributes/GPA
YEARS_IN_PROGRAM | VARCHAR | Year in Grad Training Program, Year in training in current program | attributes/YearsInProgram
START_YEAR | VARCHAR | | attributes/StartYear
END_YEAR | VARCHAR | | attributes/EndYear
FIELDOF_STUDY | VARCHAR | Specialty Focus or Specialty Training | attributes/FieldofStudy
ELIGIBILITY | VARCHAR | DO NOT USE THIS ATTRIBUTE - will be deprecated | attributes/Eligibility
EDUCATION_TYPE | VARCHAR | DO NOT USE THIS ATTRIBUTE - will be deprecated | attributes/EducationType
RANK | VARCHAR | | attributes/Rank
MEDICAL_SCHOOL | VARCHAR | | attributes/MedicalSchool

TAXONOMY
Reltio URI: configuration/entityTypes/HCP/attributes/Taxonomy, configuration/entityTypes/HCO/attributes/Taxonomy
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
TAXONOMY_URI | VARCHAR | generated key description
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
TAXONOMY | VARCHAR | | attributes/Taxonomy (HCP, HCO) | TAXONOMY_CD, LKUP_IMS_JURIDIC_CATEGORY
TYPE | VARCHAR | | attributes/Type (HCP, HCO) | TAXONOMY_TYPE
PROVIDER_TYPE | VARCHAR | | attributes/ProviderType (HCP, HCO)
CLASSIFICATION | VARCHAR | | attributes/Classification (HCP, HCO)
SPECIALIZATION | VARCHAR | | attributes/Specialization (HCP, HCO)
PRIORITY | VARCHAR | | attributes/Priority (HCP, HCO) | TAXONOMY_PRIORITY
STR_TYPE | VARCHAR | | attributes/StrType (HCO) | LKUP_IMS_STRUCTURE_TYPE

DP_PRESENCE
Reltio URI: configuration/entityTypes/HCP/attributes/DPPresence, configuration/entityTypes/HCO/attributes/DPPresence
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
DP_PRESENCE_URI | VARCHAR | generated key description
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
CHANNEL_CODE | VARCHAR | | attributes/ChannelCode (HCP, HCO) | LKUP_IMS_DP_CHANNEL
CHANNEL_NAME | VARCHAR | | attributes/ChannelName (HCP, HCO)
CHANNEL_URL | VARCHAR | | attributes/ChannelURL (HCP, HCO)
CHANNEL_REGISTRATION_DATE | DATE | | attributes/ChannelRegistrationDate (HCP, HCO)
PRESENCE_TYPE | VARCHAR | | attributes/PresenceType (HCP, HCO) | LKUP_IMS_DP_PRESENCE_TYPE
ACTIVITY | VARCHAR | | attributes/Activity (HCP, HCO) | LKUP_IMS_DP_SCORE_CODE
AUDIENCE | VARCHAR | | attributes/Audience (HCP, HCO) | LKUP_IMS_DP_SCORE_CODE

DP_SUMMARY
Reltio URI: configuration/entityTypes/HCP/attributes/DPSummary, configuration/entityTypes/HCO/attributes/DPSummary
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
DP_SUMMARY_URI | VARCHAR | generated key description
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
SUMMARY_TYPE | VARCHAR | | attributes/SummaryType (HCP, HCO) | LKUP_IMS_DP_SUMMARY_TYPE
SCORE_CODE | VARCHAR | | attributes/ScoreCode (HCP, HCO) | LKUP_IMS_DP_SCORE_CODE

ADDITIONAL_ATTRIBUTES
Reltio URI: configuration/entityTypes/HCP/attributes/AdditionalAttributes, configuration/entityTypes/HCO/attributes/AdditionalAttributes
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
ADDITIONAL_ATTRIBUTES_URI | VARCHAR | generated key description
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
ATTRIBUTE_NAME | VARCHAR | | attributes/AttributeName (HCP, HCO)
ATTRIBUTE_TYPE | VARCHAR | | attributes/AttributeType (HCP, HCO) | LKUP_IMS_TYPE_CODE
ATTRIBUTE_VALUE | VARCHAR | | attributes/AttributeValue (HCP, HCO)
ATTRIBUTE_RANK | VARCHAR | | attributes/AttributeRank (HCP, HCO)
ADDITIONAL_INFO | VARCHAR | | attributes/AdditionalInfo (HCP, HCO)

DATA_QUALITY
Data Quality
Reltio URI: configuration/entityTypes/HCP/attributes/DataQuality, configuration/entityTypes/HCO/attributes/DataQuality
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
DATA_QUALITY_URI | VARCHAR | generated key description
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
SEVERITY_LEVEL | VARCHAR | | attributes/SeverityLevel (HCP, HCO) | LKUP_IMS_DQ_SEVERITY
SOURCE | VARCHAR | | attributes/Source (HCP, HCO)
SCORE | VARCHAR | | attributes/Score (HCP, HCO)

CLASSIFICATION
Reltio URI: configuration/entityTypes/HCP/attributes/Classification, configuration/entityTypes/HCO/attributes/Classification
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
CLASSIFICATION_URI | VARCHAR | generated key description
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
CLASSIFICATION_TYPE | VARCHAR | | attributes/ClassificationType (HCP, HCO) | LKUP_IMS_CLASSIFICATION_TYPE
CLASSIFICATION_VALUE | VARCHAR | | attributes/ClassificationValue (HCP, HCO)
CLASSIFICATION_VALUE_NUMERIC_QUANTITY | VARCHAR | | attributes/ClassificationValueNumericQuantity (HCP, HCO)
STATUS | VARCHAR | | attributes/Status (HCP, HCO) | LKUP_IMS_CLASSIFICATION_STATUS
EFFECTIVE_DATE | DATE | | attributes/EffectiveDate (HCP, HCO)
END_DATE | DATE | | attributes/EndDate (HCP, HCO)
NOTES | VARCHAR | | attributes/Notes (HCP, HCO)

TAG
Reltio URI: configuration/entityTypes/HCP/attributes/Tag, configuration/entityTypes/HCO/attributes/Tag
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
TAG_URI | VARCHAR | generated key description
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
TAG_TYPE_CODE | VARCHAR | | attributes/TagTypeCode (HCP, HCO) | LKUP_IMS_TAG_TYPE_CODE
TAG_CODE | VARCHAR | | attributes/TagCode (HCP, HCO)
STATUS | VARCHAR | | attributes/Status (HCP, HCO) | LKUP_IMS_TAG_STATUS
EFFECTIVE_DATE | DATE | | attributes/EffectiveDate (HCP, HCO)
END_DATE | DATE | | attributes/EndDate (HCP, HCO)
NOTES | VARCHAR | | attributes/Notes (HCP, HCO)

EXCLUSIONS
Reltio URI: configuration/entityTypes/HCP/attributes/Exclusions, configuration/entityTypes/HCO/attributes/Exclusions
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
EXCLUSIONS_URI | VARCHAR | generated key description
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
PRODUCT_ID | VARCHAR | | attributes/ProductId (HCP, HCO) | LKUP_IMS_PRODUCT_ID
EXCLUSION_STATUS_CODE | VARCHAR | | attributes/ExclusionStatusCode (HCP, HCO) | LKUP_IMS_EXCL_STATUS_CODE
EFFECTIVE_DATE | DATE | | attributes/EffectiveDate (HCP, HCO)
END_DATE | DATE | | attributes/EndDate (HCP, HCO)
NOTES | VARCHAR | | attributes/Notes (HCP, HCO)
EXCLUSION_RULE_ID | VARCHAR | | attributes/ExclusionRuleId (HCP, HCO)

ACTION
Reltio URI: configuration/entityTypes/HCP/attributes/Action, configuration/entityTypes/HCO/attributes/Action
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
ACTION_URI | VARCHAR | generated key description
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
ACTION_CODE | VARCHAR | | attributes/ActionCode (HCP, HCO) | LKUP_IMS_ACTION_CODE
ACTION_NAME | VARCHAR | | attributes/ActionName (HCP, HCO)
ACTION_REQUESTED_DATE | DATE | | attributes/ActionRequestedDate (HCP, HCO)
ACTION_STATUS | VARCHAR | | attributes/ActionStatus (HCP, HCO) | LKUP_IMS_ACTION_STATUS
ACTION_STATUS_DATE | DATE | | attributes/ActionStatusDate (HCP, HCO)

ALTERNATE_NAME
Reltio URI: configuration/entityTypes/HCP/attributes/AlternateName, configuration/entityTypes/HCO/attributes/AlternateName
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
ALTERNATE_NAME_URI | VARCHAR | generated key description
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
NAME_TYPE_CODE | VARCHAR | | attributes/NameTypeCode (HCP, HCO) | LKUP_IMS_NAME_TYPE_CODE
NAME | VARCHAR | | attributes/Name (HCP, HCO)
FIRST_NAME | VARCHAR | | attributes/FirstName (HCP, HCO)
MIDDLE_NAME | VARCHAR | | attributes/MiddleName (HCP, HCO)
LAST_NAME | VARCHAR | | attributes/LastName (HCP, HCO)
SUFFIX_NAME | VARCHAR | | attributes/SuffixName (HCP, HCO)

LANGUAGE
Reltio URI: configuration/entityTypes/HCP/attributes/Language, configuration/entityTypes/HCO/attributes/Language
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
LANGUAGE_URI | VARCHAR | generated key description
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
LANGUAGE_CODE | VARCHAR | | attributes/LanguageCode (HCP, HCO)
PROFICIENCY_LEVEL | VARCHAR | | attributes/ProficiencyLevel (HCP, HCO)

SOURCE_DATA
Reltio URI: configuration/entityTypes/HCP/attributes/SourceData, configuration/entityTypes/HCO/attributes/SourceData
Materialized: no
Column | Type | Description | Reltio Attribute URI | LOV Name
SOURCE_DATA_URI | VARCHAR | generated key description
ENTITY_URI | VARCHAR | Reltio Entity URI
COUNTRY | VARCHAR | Country Code
ACTIVE | VARCHAR | Active Flag
ENTITY_TYPE | VARCHAR | Reltio Entity Type
CLASS_OF_TRADE_CODE | VARCHAR | | attributes/ClassOfTradeCode (HCP, HCO)
RAW_CLASS_OF_TRADE_CODE | VARCHAR | | attributes/RawClassOfTradeCode (HCP, HCO)
RAW_CLASS_OF_TRADE_DESCRIPTION | VARCHAR | | attributes/RawClassOfTradeDescription (HCP, HCO)
DATASET_IDENTIFIER | VARCHAR | | attributes/DatasetIdentifier (HCP,
configuration/entityTypes/HCO/attributes/SourceData/attributes/DatasetIdentifierDATASET_PARTY_IDENTIFIERVARCHARconfiguration/entityTypes/HCP/attributes/SourceData/attributes/DatasetPartyIdentifier, configuration/entityTypes/HCO/attributes/SourceData/attributes/DatasetPartyIdentifierPARTY_STATUS_CODEVARCHARconfiguration/entityTypes/HCP/attributes/SourceData/attributes/PartyStatusCode, configuration/entityTypes/HCO/attributes/SourceData/attributes/PartyStatusCodeNOTESReltio URI: configuration/entityTypes/HCP/attributes/Notes, configuration/entityTypes/HCO/attributes/NotesMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameNOTES_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeNOTE_CODEVARCHARconfiguration/entityTypes/HCP/attributes/Notes/attributes/NoteCode, configuration/entityTypes/HCO/attributes/Notes/attributes/NoteCodeLKUP_IMS_NOTE_CODENOTE_TEXTVARCHARconfiguration/entityTypes/HCP/attributes/Notes/attributes/NoteText, configuration/entityTypes/HCO/attributes/Notes/attributes/NoteTextHCOHealth care organizationReltio URI: configuration/entityTypes/HCOMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeNAMEVARCHARNameconfiguration/entityTypes/HCO/attributes/NameTYPE_CODEVARCHARCustomer Typeconfiguration/entityTypes/HCO/attributes/TypeCodeLKUP_IMS_HCO_CUST_TYPESUB_TYPE_CODEVARCHARCustomer Sub Typeconfiguration/entityTypes/HCO/attributes/SubTypeCodeLKUP_IMS_HCO_SUBTYPEEXCLUDE_FROM_MATCHVARCHARconfiguration/entityTypes/HCO/attributes/ExcludeFromMatchOTHER_NAMESVARCHAROther Namesconfiguration/entityTypes/HCO/attributes/OtherNamesSOURCE_IDVARCHARSource 
IDconfiguration/entityTypes/HCO/attributes/SourceIDVALIDATION_STATUSVARCHARconfiguration/entityTypes/HCO/attributes/ValidationStatusLKUP_IMS_VAL_STATUSORIGIN_SOURCEVARCHAROriginating Sourceconfiguration/entityTypes/HCO/attributes/OriginSourceCOUNTRY_CODEVARCHARCountry Codeconfiguration/entityTypes/HCO/attributes/CountryLKUP_IMS_COUNTRY_CODEFISCALVARCHARconfiguration/entityTypes/HCO/attributes/FiscalSITEVARCHARconfiguration/entityTypes/HCO/attributes/SiteGROUP_PRACTICEBOOLEANconfiguration/entityTypes/HCO/attributes/GroupPracticeGEN_FIRSTVARCHARStringconfiguration/entityTypes/HCO/attributes/GenFirstLKUP_IMS_HCO_GENFIRSTSREP_ACCESSVARCHARStringconfiguration/entityTypes/HCO/attributes/SrepAccessLKUP_IMS_HCO_SREPACCESSACCEPT_MEDICAREBOOLEANconfiguration/entityTypes/HCO/attributes/AcceptMedicareACCEPT_MEDICAIDBOOLEANconfiguration/entityTypes/HCO/attributes/AcceptMedicaidPERCENT_MEDICAREVARCHARconfiguration/entityTypes/HCO/attributes/PercentMedicarePERCENT_MEDICAIDVARCHARconfiguration/entityTypes/HCO/attributes/PercentMedicaidPARENT_COMPANYVARCHARReplacement Parent 
Satelliteconfiguration/entityTypes/HCO/attributes/ParentCompanyHEALTH_SYSTEM_NAMEVARCHARconfiguration/entityTypes/HCO/attributes/HealthSystemNameVADODBOOLEANconfiguration/entityTypes/HCO/attributes/VADODGPO_MEMBERSHIPBOOLEANconfiguration/entityTypes/HCO/attributes/GPOMembershipACADEMICBOOLEANconfiguration/entityTypes/HCO/attributes/AcademicMKT_SEGMENT_CODEVARCHARconfiguration/entityTypes/HCO/attributes/MktSegmentCodeTOTAL_LICENSE_BEDSVARCHARconfiguration/entityTypes/HCO/attributes/TotalLicenseBedsTOTAL_CENSUS_BEDSVARCHARconfiguration/entityTypes/HCO/attributes/TotalCensusBedsNUM_PATIENTSVARCHARconfiguration/entityTypes/HCO/attributes/NumPatientsTOTAL_STAFFED_BEDSVARCHARconfiguration/entityTypes/HCO/attributes/TotalStaffedBedsTOTAL_SURGERIESVARCHARconfiguration/entityTypes/HCO/attributes/TotalSurgeriesTOTAL_PROCEDURESVARCHARconfiguration/entityTypes/HCO/attributes/TotalProceduresOR_SURGERIESVARCHARconfiguration/entityTypes/HCO/attributes/ORSurgeriesRESIDENT_PROGRAMBOOLEANconfiguration/entityTypes/HCO/attributes/ResidentProgramRESIDENT_COUNTVARCHARconfiguration/entityTypes/HCO/attributes/ResidentCountNUMS_OF_PROVIDERSVARCHARNum_of_providers displays the total number of distinct providers affiliated with a business. 
Current Data: Value between 1 and 422816configuration/entityTypes/HCO/attributes/NumsOfProvidersCORP_PARENT_NAMEVARCHARCorporate Parent Nameconfiguration/entityTypes/HCO/attributes/CorpParentNameMANAGER_HCO_IDVARCHARManager Hco Idconfiguration/entityTypes/HCO/attributes/ManagerHcoIdMANAGER_HCO_NAMEVARCHARManager Hco Nameconfiguration/entityTypes/HCO/attributes/ManagerHcoNameOWNER_SUB_NAMEVARCHAROwner Sub Nameconfiguration/entityTypes/HCO/attributes/OwnerSubNameFORMULARYVARCHARconfiguration/entityTypes/HCO/attributes/FormularyLKUP_IMS_HCO_FORMULARYE_MEDICAL_RECORDVARCHARconfiguration/entityTypes/HCO/attributes/EMedicalRecordLKUP_IMS_HCO_ERECE_PRESCRIBEVARCHARconfiguration/entityTypes/HCO/attributes/EPrescribeLKUP_IMS_HCO_ERECPAY_PERFORMVARCHARconfiguration/entityTypes/HCO/attributes/PayPerformLKUP_IMS_HCO_PAYPERFORMCMS_COVERED_FOR_TEACHINGBOOLEANconfiguration/entityTypes/HCO/attributes/CMSCoveredForTeachingCOMM_HOSPBOOLEANIndicates whether the facility is a short-term (average length of stay is less than 30 days), acute-care, non-federal hospital. 
Values: Yes and Nullconfiguration/entityTypes/HCO/attributes/CommHospEMAIL_DOMAINVARCHARconfiguration/entityTypes/HCO/attributes/EmailDomainSTATUS_IMSVARCHARconfiguration/entityTypes/HCO/attributes/StatusIMSLKUP_IMS_STATUSDOING_BUSINESS_AS_NAMEVARCHARconfiguration/entityTypes/HCO/attributes/DoingBusinessAsNameCOMPANY_TYPEVARCHARconfiguration/entityTypes/HCO/attributes/CompanyTypeLKUP_IMS_ORG_TYPECUSIPVARCHARconfiguration/entityTypes/HCO/attributes/CUSIPSECTOR_IMSVARCHARSectorconfiguration/entityTypes/HCO/attributes/SectorIMSLKUP_IMS_HCO_SECTORIMSINDUSTRYVARCHARconfiguration/entityTypes/HCO/attributes/IndustryFOUNDED_YEARVARCHARconfiguration/entityTypes/HCO/attributes/FoundedYearEND_YEARVARCHARconfiguration/entityTypes/HCO/attributes/EndYearIPO_YEARVARCHARconfiguration/entityTypes/HCO/attributes/IPOYearLEGAL_DOMICILEVARCHARState of Legal Domicileconfiguration/entityTypes/HCO/attributes/LegalDomicileOWNERSHIP_STATUSVARCHARconfiguration/entityTypes/HCO/attributes/OwnershipStatusLKUP_IMS_HCO_OWNERSHIPSTATUSPROFIT_STATUSVARCHARThe profit status of the facility. Values include: For Profit, Not For Profit, Government, Armed Forces, or NULL (if data is unknown or not applicable).configuration/entityTypes/HCO/attributes/ProfitStatusLKUP_IMS_HCO_PROFITSTATUSCMIVARCHARCMI is the Case Mix Index for an organization. This is a government-assigned measure of the complexity of medical and surgical care provided to Medicare inpatients by a hospital under the prospective payment system (PPS). It factors in a hospital's use of technology for patient care and medical services' 
level of acuity required by the patient population.configuration/entityTypes/HCO/attributes/CMISOURCE_NAMEVARCHARconfiguration/entityTypes/HCO/attributes/SourceNameSUB_SOURCE_NAMEVARCHARconfiguration/entityTypes/HCO/attributes/SubSourceNameDEA_BUSINESS_ACTIVITYVARCHARconfiguration/entityTypes/HCO/attributes/DEABusinessActivityIMAGE_LINKSVARCHARconfiguration/entityTypes/HCO/attributes/ImageLinksVIDEO_LINKSVARCHARconfiguration/entityTypes/HCO/attributes/VideoLinksDOCUMENT_LINKSVARCHARconfiguration/entityTypes/HCO/attributes/DocumentLinksWEBSITE_URLVARCHARconfiguration/entityTypes/HCO/attributes/WebsiteURLTAX_IDVARCHARconfiguration/entityTypes/HCO/attributes/TaxIDDESCRIPTIONVARCHARconfiguration/entityTypes/HCO/attributes/DescriptionSTATUS_UPDATE_DATEDATEconfiguration/entityTypes/HCO/attributes/StatusUpdateDateSTATUS_REASON_CODEVARCHARconfiguration/entityTypes/HCO/attributes/StatusReasonCodeLKUP_IMS_SRC_DEACTIVE_REASON_CODECOMMENTERSVARCHARCommentersconfiguration/entityTypes/HCO/attributes/CommentersCLIENT_TYPE_CODEVARCHARClient Customer Typeconfiguration/entityTypes/HCO/attributes/ClientTypeCodeLKUP_IMS_HCO_CLIENT_CUST_TYPEOFFICIAL_NAMEVARCHAROfficial Nameconfiguration/entityTypes/HCO/attributes/OfficialNameVALIDATION_CHANGE_REASONVARCHARconfiguration/entityTypes/HCO/attributes/ValidationChangeReasonLKUP_IMS_VAL_STATUS_CHANGE_REASONVALIDATION_CHANGE_DATEDATEconfiguration/entityTypes/HCO/attributes/ValidationChangeDateCREATE_DATEDATEconfiguration/entityTypes/HCO/attributes/CreateDateUPDATE_DATEDATEconfiguration/entityTypes/HCO/attributes/UpdateDateCHECK_DATEDATEconfiguration/entityTypes/HCO/attributes/CheckDateSTATE_CODEVARCHARSituation of the workplace: Open/Closedconfiguration/entityTypes/HCO/attributes/StateCodeLKUP_IMS_PROFILE_STATESTATE_DATEDATEDate when state of the record was last modified.configuration/entityTypes/HCO/attributes/StateDateSTATUS_CHANGE_REASONVARCHARReason the status of the Organization 
changedconfiguration/entityTypes/HCO/attributes/StatusChangeReasonNUM_EMPLOYEESVARCHARconfiguration/entityTypes/HCO/attributes/NumEmployeesNUM_MED_EMPLOYEESVARCHARconfiguration/entityTypes/HCO/attributes/NumMedEmployeesTOTAL_BEDS_INTENSIVE_CAREVARCHARconfiguration/entityTypes/HCO/attributes/TotalBedsIntensiveCareNUM_EXAMINATION_ROOMVARCHARconfiguration/entityTypes/HCO/attributes/NumExaminationRoomNUM_AFFILIATED_SITESVARCHARconfiguration/entityTypes/HCO/attributes/NumAffiliatedSitesNUM_ENROLLED_MEMBERSVARCHARconfiguration/entityTypes/HCO/attributes/NumEnrolledMembersNUM_IN_PATIENTSVARCHARconfiguration/entityTypes/HCO/attributes/NumInPatientsNUM_OUT_PATIENTSVARCHARconfiguration/entityTypes/HCO/attributes/NumOutPatientsNUM_OPERATING_ROOMSVARCHARconfiguration/entityTypes/HCO/attributes/NumOperatingRoomsNUM_PATIENTS_X_WEEKVARCHARconfiguration/entityTypes/HCO/attributes/NumPatientsXWeekACT_TYPE_CODEVARCHARconfiguration/entityTypes/HCO/attributes/ActTypeCodeLKUP_IMS_ACTIVITY_TYPEDISPENSE_DRUGSBOOLEANconfiguration/entityTypes/HCO/attributes/DispenseDrugsNUM_PRESCRIBERSVARCHARconfiguration/entityTypes/HCO/attributes/NumPrescribersPATIENTS_X_YEARVARCHARconfiguration/entityTypes/HCO/attributes/PatientsXYearACCEPTS_NEW_PATIENTSVARCHARY/N field indicating whether the workplace accepts new patientsconfiguration/entityTypes/HCO/attributes/AcceptsNewPatientsEXTERNAL_INFORMATION_URLVARCHARconfiguration/entityTypes/HCO/attributes/ExternalInformationURLMATCH_STATUS_CODEVARCHARconfiguration/entityTypes/HCO/attributes/MatchStatusCodeLKUP_IMS_MATCH_STATUS_CODESUBSCRIPTION_FLAG1BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes/HCO/attributes/SubscriptionFlag1SUBSCRIPTION_FLAG2BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes/HCO/attributes/SubscriptionFlag2SUBSCRIPTION_FLAG3BOOLEANUsed for setting a profile eligible for certain 
subscriptionconfiguration/entityTypes/HCO/attributes/SubscriptionFlag3SUBSCRIPTION_FLAG4BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes/HCO/attributes/SubscriptionFlag4SUBSCRIPTION_FLAG5BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes/HCO/attributes/SubscriptionFlag5SUBSCRIPTION_FLAG6BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes/HCO/attributes/SubscriptionFlag6SUBSCRIPTION_FLAG7BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes/HCO/attributes/SubscriptionFlag7SUBSCRIPTION_FLAG8BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes/HCO/attributes/SubscriptionFlag8SUBSCRIPTION_FLAG9BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes/HCO/attributes/SubscriptionFlag9SUBSCRIPTION_FLAG10BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/entityTypes/HCO/attributes/SubscriptionFlag10ROLE_CODEVARCHARconfiguration/entityTypes/HCO/attributes/RoleCodeLKUP_IMS_ORG_ROLE_CODEACTIVATION_DATEVARCHARconfiguration/entityTypes/HCO/attributes/ActivationDatePARTY_IDVARCHARconfiguration/entityTypes/HCO/attributes/PartyIDLAST_VERIFICATION_STATUSVARCHARconfiguration/entityTypes/HCO/attributes/LastVerificationStatusLAST_VERIFICATION_DATEDATEconfiguration/entityTypes/HCO/attributes/LastVerificationDateEFFECTIVE_DATEDATEconfiguration/entityTypes/HCO/attributes/EffectiveDateEND_DATEDATEconfiguration/entityTypes/HCO/attributes/EndDatePARTY_LOCALIZATION_CODEVARCHARconfiguration/entityTypes/HCO/attributes/PartyLocalizationCodeMATCH_PARTY_NAMEVARCHARconfiguration/entityTypes/HCO/attributes/MatchPartyNameDELETE_ENTITYBOOLEANDeleteEntity flag to identify GDPR compliant 
dataconfiguration/entityTypes/HCO/attributes/DeleteEntityOK_VR_TRIGGERVARCHARconfiguration/entityTypes/HCO/attributes/OK_VR_TriggerLKUP_IMS_SEND_FOR_VALIDATIONHCO_MAIN_HCO_CLASSOF_TRADE_NReltio URI: configuration/entityTypes/HCO/attributes/ClassofTradeNMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameMAINHCO_URIVARCHARgenerated key descriptionCLASSOFTRADEN_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypePRIORITYVARCHARNumeric code for the primary class of tradeconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/PriorityCLASSIFICATIONVARCHARconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/ClassificationLKUP_IMS_HCO_CLASSOFTRADEN_CLASSIFICATIONFACILITY_TYPEVARCHARconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/FacilityTypeLKUP_IMS_HCO_CLASSOFTRADEN_FACILITYTYPESPECIALTYVARCHARconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/SpecialtyLKUP_IMS_HCO_CLASSOFTRADEN_SPECIALTYHCO_ADDRESS_UNITReltio URI: configuration/entityTypes/Location/attributes/UnitMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameADDRESS_URIVARCHARgenerated key descriptionUNIT_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeUNIT_NAMEVARCHARconfiguration/entityTypes/Location/attributes/Unit/attributes/UnitNameUNIT_VALUEVARCHARconfiguration/entityTypes/Location/attributes/Unit/attributes/UnitValueHCO_ADDRESS_BRICKReltio URI: configuration/entityTypes/Location/attributes/BrickMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameADDRESS_URIVARCHARgenerated key descriptionBRICK_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity 
TypeTYPEVARCHARconfiguration/entityTypes/Location/attributes/Brick/attributes/TypeLKUP_IMS_BRICK_TYPEBRICK_VALUEVARCHARconfiguration/entityTypes/Location/attributes/Brick/attributes/BrickValueLKUP_IMS_BRICK_VALUESORT_ORDERVARCHARconfiguration/entityTypes/Location/attributes/Brick/attributes/SortOrderKEY_FINANCIAL_FIGURES_OVERVIEWReltio URI: configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverviewMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameKEY_FINANCIAL_FIGURES_OVERVIEW_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeFINANCIAL_STATEMENT_TO_DATEDATEconfiguration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/FinancialStatementToDateFINANCIAL_PERIOD_DURATIONVARCHARconfiguration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/FinancialPeriodDurationSALES_REVENUE_CURRENCYVARCHARconfiguration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueCurrencySALES_REVENUE_CURRENCY_CODEVARCHARconfiguration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueCurrencyCodeSALES_REVENUE_RELIABILITY_CODEVARCHARconfiguration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueReliabilityCodeSALES_REVENUE_UNIT_OF_SIZEVARCHARconfiguration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueUnitOfSizeSALES_REVENUE_AMOUNTVARCHARconfiguration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueAmountPROFIT_OR_LOSS_CURRENCYVARCHARconfiguration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/ProfitOrLossCurrencyPROFIT_OR_LOSS_RELIABILITY_TEXTVARCHARconfiguration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/ProfitOrLossReliabilityTextPROFIT_OR_LOSS_UNIT_OF_SIZEVARCHARconfiguration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/ProfitOr
LossUnitOfSizePROFIT_OR_LOSS_AMOUNTVARCHARconfiguration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/ProfitOrLossAmountSALES_TURNOVER_GROWTH_RATEVARCHARconfiguration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesTurnoverGrowthRateSALES3YRY_GROWTH_RATEVARCHARconfiguration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/Sales3YryGrowthRateSALES5YRY_GROWTH_RATEVARCHARconfiguration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/Sales5YryGrowthRateEMPLOYEE3YRY_GROWTH_RATEVARCHARconfiguration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/Employee3YryGrowthRateEMPLOYEE5YRY_GROWTH_RATEVARCHARconfiguration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/Employee5YryGrowthRateCLASSOF_TRADE_NReltio URI: configuration/entityTypes/HCO/attributes/ClassofTradeNMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameCLASSOF_TRADE_N_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypePRIORITYVARCHARNumeric code for the primary class of tradeconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/PriorityCLASSIFICATIONVARCHARconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/ClassificationLKUP_IMS_HCO_CLASSOFTRADEN_CLASSIFICATIONFACILITY_TYPEVARCHARconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/FacilityTypeLKUP_IMS_HCO_CLASSOFTRADEN_FACILITYTYPESPECIALTYVARCHARconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/SpecialtyLKUP_IMS_HCO_CLASSOFTRADEN_SPECIALTYSPECIALTYDO NOT USE THIS ATTRIBUTE - will be deprecatedReltio URI: configuration/entityTypes/HCO/attributes/SpecialtyMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameSPECIALTY_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity 
TypeSPECIALTYVARCHARDO NOT USE THIS ATTRIBUTE - will be deprecatedconfiguration/entityTypes/HCO/attributes/Specialty/attributes/SpecialtyTYPEVARCHARDO NOT USE THIS ATTRIBUTE - will be deprecatedconfiguration/entityTypes/HCO/attributes/Specialty/attributes/TypeGSA_EXCLUSIONReltio URI: configuration/entityTypes/HCO/attributes/GSAExclusionMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameGSA_EXCLUSION_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeSANCTION_IDVARCHARconfiguration/entityTypes/HCO/attributes/GSAExclusion/attributes/SanctionIdORGANIZATION_NAMEVARCHARconfiguration/entityTypes/HCO/attributes/GSAExclusion/attributes/OrganizationNameADDRESS_LINE1VARCHARconfiguration/entityTypes/HCO/attributes/GSAExclusion/attributes/AddressLine1ADDRESS_LINE2VARCHARconfiguration/entityTypes/HCO/attributes/GSAExclusion/attributes/AddressLine2CITYVARCHARconfiguration/entityTypes/HCO/attributes/GSAExclusion/attributes/CitySTATEVARCHARconfiguration/entityTypes/HCO/attributes/GSAExclusion/attributes/StateZIPVARCHARconfiguration/entityTypes/HCO/attributes/GSAExclusion/attributes/ZipACTION_DATEVARCHARconfiguration/entityTypes/HCO/attributes/GSAExclusion/attributes/ActionDateTERM_DATEVARCHARconfiguration/entityTypes/HCO/attributes/GSAExclusion/attributes/TermDateAGENCYVARCHARconfiguration/entityTypes/HCO/attributes/GSAExclusion/attributes/AgencyCONFIDENCEVARCHARconfiguration/entityTypes/HCO/attributes/GSAExclusion/attributes/ConfidenceOIG_EXCLUSIONReltio URI: configuration/entityTypes/HCO/attributes/OIGExclusionMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameOIG_EXCLUSION_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity 
TypeSANCTION_IDVARCHARconfiguration/entityTypes/HCO/attributes/OIGExclusion/attributes/SanctionIdACTION_CODEVARCHARconfiguration/entityTypes/HCO/attributes/OIGExclusion/attributes/ActionCodeACTION_DESCRIPTIONVARCHARconfiguration/entityTypes/HCO/attributes/OIGExclusion/attributes/ActionDescriptionBOARD_CODEVARCHARCourt case board IDconfiguration/entityTypes/HCO/attributes/OIGExclusion/attributes/BoardCodeBOARD_DESCVARCHARCourt case board descriptionconfiguration/entityTypes/HCO/attributes/OIGExclusion/attributes/BoardDescACTION_DATEDATEconfiguration/entityTypes/HCO/attributes/OIGExclusion/attributes/ActionDateOFFENSE_CODEVARCHARconfiguration/entityTypes/HCO/attributes/OIGExclusion/attributes/OffenseCodeOFFENSE_DESCRIPTIONVARCHARconfiguration/entityTypes/HCO/attributes/OIGExclusion/attributes/OffenseDescriptionBRICKReltio URI: configuration/entityTypes/HCO/attributes/BrickMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameBRICK_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeTYPEVARCHARconfiguration/entityTypes/HCO/attributes/Brick/attributes/TypeLKUP_IMS_BRICK_TYPEBRICK_VALUEVARCHARconfiguration/entityTypes/HCO/attributes/Brick/attributes/BrickValueLKUP_IMS_BRICK_VALUEEMRReltio URI: configuration/entityTypes/HCO/attributes/EMRMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameEMR_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeNOTESBOOLEANY/N field indicating whether the workplace uses EMR software to write notesconfiguration/entityTypes/HCO/attributes/EMR/attributes/NotesPRESCRIBESBOOLEANY/N field indicating whether the workplace uses EMR software to write prescriptionsconfiguration/entityTypes/HCO/attributes/EMR/attributes/PrescribesLKUP_IMS_EMR_PRESCRIBESELABS_X_RAYSBOOLEANY/N indicating whether the workplace uses EMR software 
for eLabs/Xraysconfiguration/entityTypes/HCO/attributes/EMR/attributes/ElabsXRaysLKUP_IMS_EMR_ELABS_XRAYSNUMBER_OF_PHYSICIANSVARCHARNumber of physicians that use EMR software in the workplaceconfiguration/entityTypes/HCO/attributes/EMR/attributes/NumberOfPhysiciansPOLICYMAKERVARCHARIndividual who makes decisions regarding EMR softwareconfiguration/entityTypes/HCO/attributes/EMR/attributes/PolicymakerSOFTWARE_TYPEVARCHARName of the EMR software used at the workplaceconfiguration/entityTypes/HCO/attributes/EMR/attributes/SoftwareTypeADOPTIONVARCHARWhen the EMR software was adopted at the workplaceconfiguration/entityTypes/HCO/attributes/EMR/attributes/AdoptionBUYING_FACTORVARCHARBuying factor which influenced the workplace's decision to purchase the EMRconfiguration/entityTypes/HCO/attributes/EMR/attributes/BuyingFactorOWNERVARCHARIndividual who made the decision to purchase EMR softwareconfiguration/entityTypes/HCO/attributes/EMR/attributes/OwnerAWAREBOOLEANconfiguration/entityTypes/HCO/attributes/EMR/attributes/AwareLKUP_IMS_EMR_AWARESOFTWAREBOOLEANconfiguration/entityTypes/HCO/attributes/EMR/attributes/SoftwareLKUP_IMS_EMR_SOFTWAREVENDORVARCHARconfiguration/entityTypes/HCO/attributes/EMR/attributes/VendorLKUP_IMS_EMR_VENDORBUSINESS_HOURSReltio URI: configuration/entityTypes/HCO/attributes/BusinessHoursMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameBUSINESS_HOURS_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity 
TypeDAYVARCHARconfiguration/entityTypes/HCO/attributes/BusinessHours/attributes/DayPERIODVARCHARconfiguration/entityTypes/HCO/attributes/BusinessHours/attributes/PeriodTIME_SLOTVARCHARconfiguration/entityTypes/HCO/attributes/BusinessHours/attributes/TimeSlotSTART_TIMEVARCHARconfiguration/entityTypes/HCO/attributes/BusinessHours/attributes/StartTimeEND_TIMEVARCHARconfiguration/entityTypes/HCO/attributes/BusinessHours/attributes/EndTimeAPPOINTMENT_ONLYBOOLEANconfiguration/entityTypes/HCO/attributes/BusinessHours/attributes/AppointmentOnlyPERIOD_STARTVARCHARconfiguration/entityTypes/HCO/attributes/BusinessHours/attributes/PeriodStartPERIOD_ENDVARCHARconfiguration/entityTypes/HCO/attributes/BusinessHours/attributes/PeriodEndACO_DETAILSACO DetailsReltio URI: configuration/entityTypes/HCO/attributes/ACODetailsMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameACO_DETAILS_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeACO_TYPE_CODEVARCHARAcoTypeCodeconfiguration/entityTypes/HCO/attributes/ACODetails/attributes/AcoTypeCodeLKUP_IMS_ACO_TYPEACO_TYPE_CATGVARCHARAcoTypeCatgconfiguration/entityTypes/HCO/attributes/ACODetails/attributes/AcoTypeCatgACO_TYPE_MDELVARCHARAcoTypeMdelconfiguration/entityTypes/HCO/attributes/ACODetails/attributes/AcoTypeMdelACO_DETAIL_IDVARCHARAcoDetailIdconfiguration/entityTypes/HCO/attributes/ACODetails/attributes/AcoDetailIdACO_DETAIL_CODEVARCHARAcoDetailCodeconfiguration/entityTypes/HCO/attributes/ACODetails/attributes/AcoDetailCodeLKUP_IMS_ACO_DETAILACO_DETAIL_GROUP_CODEVARCHARAcoDetailGroupCodeconfiguration/entityTypes/HCO/attributes/ACODetails/attributes/AcoDetailGroupCodeLKUP_IMS_ACO_DETAIL_GROUPACO_VALVARCHARAcoValconfiguration/entityTypes/HCO/attributes/ACODetails/attributes/AcoValTRADE_STYLE_NAMEReltio URI: configuration/entityTypes/HCO/attributes/TradeStyleNameMaterialized: noColumnTypeDescriptionReltio Attribute URILOV 
NameTRADE_STYLE_NAME_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeORGANIZATION_NAMEVARCHARconfiguration/entityTypes/HCO/attributes/TradeStyleName/attributes/OrganizationNameLANGUAGE_CODEVARCHARconfiguration/entityTypes/HCO/attributes/TradeStyleName/attributes/LanguageCodeFORMER_ORGANIZATION_PRIMARY_NAMEVARCHARconfiguration/entityTypes/HCO/attributes/TradeStyleName/attributes/FormerOrganizationPrimaryNameDISPLAY_SEQUENCEVARCHARconfiguration/entityTypes/HCO/attributes/TradeStyleName/attributes/DisplaySequenceTYPEVARCHARconfiguration/entityTypes/HCO/attributes/TradeStyleName/attributes/TypePRIOR_DUNS_NUMBERReltio URI: configuration/entityTypes/HCO/attributes/PriorDUNSNUmberMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NamePRIOR_DUNSN_UMBER_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeTRANSFER_DUNS_NUMBERVARCHARconfiguration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferDUNSNumberTRANSFER_REASON_TEXTVARCHARconfiguration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferReasonTextTRANSFER_REASON_CODEVARCHARconfiguration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferReasonCodeTRANSFER_DATEVARCHARconfiguration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferDateTRANSFERRED_FROM_DUNS_NUMBERVARCHARconfiguration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferredFromDUNSNumberTRANSFERRED_TO_DUNS_NUMBERVARCHARconfiguration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferredToDUNSNumberINDUSTRY_CODEReltio URI: configuration/entityTypes/HCO/attributes/IndustryCodeMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameINDUSTRY_CODE_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive 
FlagENTITY_TYPEVARCHARReltio Entity TypeDNB_CODEVARCHARconfiguration/entityTypes/HCO/attributes/IndustryCode/attributes/DNBCodeINDUSTRY_CODEVARCHARconfiguration/entityTypes/HCO/attributes/IndustryCode/attributes/IndustryCodeINDUSTRY_CODE_DESCRIPTIONVARCHARconfiguration/entityTypes/HCO/attributes/IndustryCode/attributes/IndustryCodeDescriptionINDUSTRY_CODE_LANGUAGE_CODEVARCHARconfiguration/entityTypes/HCO/attributes/IndustryCode/attributes/IndustryCodeLanguageCodeINDUSTRY_CODE_WRITING_SCRIPTVARCHARconfiguration/entityTypes/HCO/attributes/IndustryCode/attributes/IndustryCodeWritingScriptDISPLAY_SEQUENCEVARCHARconfiguration/entityTypes/HCO/attributes/IndustryCode/attributes/DisplaySequenceSALES_PERCENTAGEVARCHARconfiguration/entityTypes/HCO/attributes/IndustryCode/attributes/SalesPercentageTYPEVARCHARconfiguration/entityTypes/HCO/attributes/IndustryCode/attributes/TypeINDUSTRY_TYPE_CODEVARCHARconfiguration/entityTypes/HCO/attributes/IndustryCode/attributes/IndustryTypeCodeIMPORT_EXPORT_AGENTVARCHARconfiguration/entityTypes/HCO/attributes/IndustryCode/attributes/ImportExportAgentACTIVITIES_AND_OPERATIONSReltio URI: configuration/entityTypes/HCO/attributes/ActivitiesAndOperationsMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameACTIVITIES_AND_OPERATIONS_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity 
TypeLINE_OF_BUSINESS_DESCRIPTIONVARCHARconfiguration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/LineOfBusinessDescriptionLANGUAGE_CODEVARCHARconfiguration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/LanguageCodeWRITING_SCRIPT_CODEVARCHARconfiguration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/WritingScriptCodeIMPORT_INDICATORBOOLEANconfiguration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/ImportIndicatorEXPORT_INDICATORBOOLEANconfiguration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/ExportIndicatorAGENT_INDICATORBOOLEANconfiguration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/AgentIndicatorEMPLOYEE_DETAILSReltio URI: configuration/entityTypes/HCO/attributes/EmployeeDetailsMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameEMPLOYEE_DETAILS_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeINDIVIDUAL_EMPLOYEE_FIGURES_DATEVARCHARconfiguration/entityTypes/HCO/attributes/EmployeeDetails/attributes/IndividualEmployeeFiguresDateINDIVIDUAL_TOTAL_EMPLOYEE_QUANTITYVARCHARconfiguration/entityTypes/HCO/attributes/EmployeeDetails/attributes/IndividualTotalEmployeeQuantityINDIVIDUAL_RELIABILITY_TEXTVARCHARconfiguration/entityTypes/HCO/attributes/EmployeeDetails/attributes/IndividualReliabilityTextTOTAL_EMPLOYEE_QUANTITYVARCHARconfiguration/entityTypes/HCO/attributes/EmployeeDetails/attributes/TotalEmployeeQuantityTOTAL_EMPLOYEE_RELIABILITYVARCHARconfiguration/entityTypes/HCO/attributes/EmployeeDetails/attributes/TotalEmployeeReliabilityPRINCIPALS_INCLUDEDVARCHARconfiguration/entityTypes/HCO/attributes/EmployeeDetails/attributes/PrincipalsIncludedMATCH_QUALITYReltio URI: configuration/entityTypes/HCO/attributes/MatchQualityMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameMATCH_QUALITY_URIVARCHARgenerated key 
descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeCONFIDENCE_CODEVARCHARDnB Match Quality Confidence Codeconfiguration/entityTypes/HCO/attributes/MatchQuality/attributes/ConfidenceCodeDISPLAY_SEQUENCEVARCHARDnB Match Quality Display Sequenceconfiguration/entityTypes/HCO/attributes/MatchQuality/attributes/DisplaySequenceMATCH_CODEVARCHARconfiguration/entityTypes/HCO/attributes/MatchQuality/attributes/MatchCodeBEMFABVARCHARconfiguration/entityTypes/HCO/attributes/MatchQuality/attributes/BEMFABMATCH_GRADEVARCHARconfiguration/entityTypes/HCO/attributes/MatchQuality/attributes/MatchGradeORGANIZATION_DETAILReltio URI: configuration/entityTypes/HCO/attributes/OrganizationDetailMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameORGANIZATION_DETAIL_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeMEMBER_ROLEVARCHARconfiguration/entityTypes/HCO/attributes/OrganizationDetail/attributes/MemberRoleSTANDALONEBOOLEANconfiguration/entityTypes/HCO/attributes/OrganizationDetail/attributes/StandaloneCONTROL_OWNERSHIP_DATEDATEconfiguration/entityTypes/HCO/attributes/OrganizationDetail/attributes/ControlOwnershipDateOPERATING_STATUSVARCHARconfiguration/entityTypes/HCO/attributes/OrganizationDetail/attributes/OperatingStatusSTART_YEARVARCHARconfiguration/entityTypes/HCO/attributes/OrganizationDetail/attributes/StartYearFRANCHISE_OPERATION_TYPEVARCHARconfiguration/entityTypes/HCO/attributes/OrganizationDetail/attributes/FranchiseOperationTypeBONEYARD_ORGANIZATIONBOOLEANconfiguration/entityTypes/HCO/attributes/OrganizationDetail/attributes/BoneyardOrganizationOPERATING_STATUS_COMMENTVARCHARconfiguration/entityTypes/HCO/attributes/OrganizationDetail/attributes/OperatingStatusCommentDUNS_HIERARCHYReltio URI: configuration/entityTypes/HCO/attributes/DUNSHierarchyMaterialized: 
noColumnTypeDescriptionReltio Attribute URILOV NameDUNS_HIERARCHY_URIVARCHARgenerated key descriptionENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeGLOBAL_ULTIMATE_DUNSVARCHARconfiguration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/GlobalUltimateDUNSGLOBAL_ULTIMATE_ORGANIZATIONVARCHARconfiguration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/GlobalUltimateOrganizationDOMESTIC_ULTIMATE_DUNSVARCHARconfiguration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/DomesticUltimateDUNSDOMESTIC_ULTIMATE_ORGANIZATIONVARCHARconfiguration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/DomesticUltimateOrganizationPARENT_DUNSVARCHARconfiguration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/ParentDUNSPARENT_ORGANIZATIONVARCHARconfiguration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/ParentOrganizationHEADQUARTERS_DUNSVARCHARconfiguration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/HeadquartersDUNSHEADQUARTERS_ORGANIZATIONVARCHARconfiguration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/HeadquartersOrganizationAFFILIATIONSReltio URI: configuration/relationTypes/HasHealthCareRole, configuration/relationTypes/AffiliatedPurchasing, configuration/relationTypes/Activity, configuration/relationTypes/ManagedMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameRELATION_URIVARCHARReltio Relation URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagRELATION_TYPEVARCHARReltio Relation TypeSTART_ENTITY_URIVARCHARReltio Start Entity URIEND_ENTITY_URIVARCHARReltio End Entity URIREL_GROUPVARCHARHCRS relation group from the relationship type, each rel group refers to one relation idconfiguration/relationTypes/AffiliatedPurchasing/attributes/RelGroup, 
configuration/relationTypes/Managed/attributes/RelGroupLKUP_IMS_RELGROUP_TYPEREL_ORDER_AFFILIATEDPURCHASINGVARCHAROrderconfiguration/relationTypes/AffiliatedPurchasing/attributes/RelOrderSTATUS_REASON_CODEVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/StatusReasonCode, configuration/relationTypes/Activity/attributes/StatusReasonCode, configuration/relationTypes/Managed/attributes/StatusReasonCodeLKUP_IMS_SRC_DEACTIVE_REASON_CODESTATUS_UPDATE_DATEDATEconfiguration/relationTypes/AffiliatedPurchasing/attributes/StatusUpdateDate, configuration/relationTypes/Activity/attributes/StatusUpdateDate, configuration/relationTypes/Managed/attributes/StatusUpdateDateVALIDATION_CHANGE_REASONVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/ValidationChangeReason, configuration/relationTypes/Activity/attributes/ValidationChangeReason, configuration/relationTypes/Managed/attributes/ValidationChangeReasonLKUP_IMS_VAL_STATUS_CHANGE_REASONVALIDATION_CHANGE_DATEDATEconfiguration/relationTypes/AffiliatedPurchasing/attributes/ValidationChangeDate, configuration/relationTypes/Activity/attributes/ValidationChangeDate, configuration/relationTypes/Managed/attributes/ValidationChangeDateVALIDATION_STATUSVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/ValidationStatus, configuration/relationTypes/Activity/attributes/ValidationStatus, configuration/relationTypes/Managed/attributes/ValidationStatusLKUP_IMS_VAL_STATUSAFFILIATION_STATUSVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/AffiliationStatus, configuration/relationTypes/Activity/attributes/AffiliationStatus, configuration/relationTypes/Managed/attributes/AffiliationStatusLKUP_IMS_STATUSCOUNTRYVARCHARCountry Codeconfiguration/relationTypes/AffiliatedPurchasing/attributes/Country, configuration/relationTypes/Activity/attributes/Country, configuration/relationTypes/Managed/attributes/CountryLKUP_IMS_COUNTRY_CODEAFFILIATION_NAMEVARCHARAffiliation 
Nameconfiguration/relationTypes/AffiliatedPurchasing/attributes/AffiliationName, configuration/relationTypes/Activity/attributes/AffiliationNameSUBSCRIPTION_FLAG1BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag1, configuration/relationTypes/Activity/attributes/SubscriptionFlag1, configuration/relationTypes/Managed/attributes/SubscriptionFlag1SUBSCRIPTION_FLAG2BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag2, configuration/relationTypes/Activity/attributes/SubscriptionFlag2, configuration/relationTypes/Managed/attributes/SubscriptionFlag2SUBSCRIPTION_FLAG3BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag3, configuration/relationTypes/Activity/attributes/SubscriptionFlag3, configuration/relationTypes/Managed/attributes/SubscriptionFlag3SUBSCRIPTION_FLAG4BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag4, configuration/relationTypes/Activity/attributes/SubscriptionFlag4, configuration/relationTypes/Managed/attributes/SubscriptionFlag4SUBSCRIPTION_FLAG5BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag5, configuration/relationTypes/Activity/attributes/SubscriptionFlag5, configuration/relationTypes/Managed/attributes/SubscriptionFlag5SUBSCRIPTION_FLAG6BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag6, configuration/relationTypes/Activity/attributes/SubscriptionFlag6, configuration/relationTypes/Managed/attributes/SubscriptionFlag6SUBSCRIPTION_FLAG7BOOLEANUsed for setting a profile eligible for certain 
subscriptionconfiguration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag7, configuration/relationTypes/Activity/attributes/SubscriptionFlag7, configuration/relationTypes/Managed/attributes/SubscriptionFlag7SUBSCRIPTION_FLAG8BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag8, configuration/relationTypes/Activity/attributes/SubscriptionFlag8, configuration/relationTypes/Managed/attributes/SubscriptionFlag8SUBSCRIPTION_FLAG9BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag9, configuration/relationTypes/Activity/attributes/SubscriptionFlag9, configuration/relationTypes/Managed/attributes/SubscriptionFlag9SUBSCRIPTION_FLAG10BOOLEANUsed for setting a profile eligible for certain subscriptionconfiguration/relationTypes/AffiliatedPurchasing/attributes/SubscriptionFlag10, configuration/relationTypes/Activity/attributes/SubscriptionFlag10, configuration/relationTypes/Managed/attributes/SubscriptionFlag10BEST_RELATIONSHIP_INDICATORVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/BestRelationshipIndicator, configuration/relationTypes/Activity/attributes/BestRelationshipIndicator, configuration/relationTypes/Managed/attributes/BestRelationshipIndicatorLKUP_IMS_YES_NORELATIONSHIP_RANKVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/RelationshipRank, configuration/relationTypes/Activity/attributes/RelationshipRank, configuration/relationTypes/Managed/attributes/RelationshipRankRELATIONSHIP_VIEW_CODEVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/RelationshipViewCode, configuration/relationTypes/Activity/attributes/RelationshipViewCode, configuration/relationTypes/Managed/attributes/RelationshipViewCodeRELATIONSHIP_VIEW_TYPE_CODEVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/RelationshipViewTypeCode, 
configuration/relationTypes/Activity/attributes/RelationshipViewTypeCode, configuration/relationTypes/Managed/attributes/RelationshipViewTypeCodeRELATIONSHIP_STATUSVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/RelationshipStatus, configuration/relationTypes/Activity/attributes/RelationshipStatus, configuration/relationTypes/Managed/attributes/RelationshipStatusLKUP_IMS_RELATIONSHIP_STATUSRELATIONSHIP_CREATE_DATEDATEconfiguration/relationTypes/AffiliatedPurchasing/attributes/RelationshipCreateDate, configuration/relationTypes/Activity/attributes/RelationshipCreateDate, configuration/relationTypes/Managed/attributes/RelationshipCreateDateUPDATE_DATEDATEconfiguration/relationTypes/AffiliatedPurchasing/attributes/UpdateDate, configuration/relationTypes/Activity/attributes/UpdateDate, configuration/relationTypes/Managed/attributes/UpdateDateRELATIONSHIP_START_DATEDATEconfiguration/relationTypes/AffiliatedPurchasing/attributes/RelationshipStartDate, configuration/relationTypes/Activity/attributes/RelationshipStartDate, configuration/relationTypes/Managed/attributes/RelationshipStartDateRELATIONSHIP_END_DATEDATEconfiguration/relationTypes/AffiliatedPurchasing/attributes/RelationshipEndDate, configuration/relationTypes/Activity/attributes/RelationshipEndDate, configuration/relationTypes/Managed/attributes/RelationshipEndDateCHECKED_DATEDATEconfiguration/relationTypes/Activity/attributes/CheckedDatePREFERRED_MAIL_INDICATORBOOLEANconfiguration/relationTypes/Activity/attributes/PreferredMailIndicatorPREFERRED_VISIT_INDICATORBOOLEANconfiguration/relationTypes/Activity/attributes/PreferredVisitIndicatorCOMMITTEE_MEMBERVARCHARconfiguration/relationTypes/Activity/attributes/CommitteeMemberLKUP_IMS_MEMBER_MED_COMMITTEEAPPOINTMENT_REQUIREDBOOLEANconfiguration/relationTypes/Activity/attributes/AppointmentRequiredAFFILIATION_TYPE_CODEVARCHARAffiliation Type 
Codeconfiguration/relationTypes/Activity/attributes/AffiliationTypeCodeWORKING_STATUSVARCHARconfiguration/relationTypes/Activity/attributes/WorkingStatusLKUP_IMS_WORKING_STATUSTITLEVARCHARconfiguration/relationTypes/Activity/attributes/TitleLKUP_IMS_PROF_TITLERANKVARCHARconfiguration/relationTypes/Activity/attributes/RankPRIMARY_AFFILIATION_INDICATORBOOLEANconfiguration/relationTypes/Activity/attributes/PrimaryAffiliationIndicatorACT_WEBSITE_URLVARCHARconfiguration/relationTypes/Activity/attributes/ActWebsiteURLACT_VALIDATION_STATUSVARCHARconfiguration/relationTypes/Activity/attributes/ActValidationStatusLKUP_IMS_VAL_STATUSPREF_OR_ACTIVEVARCHARconfiguration/relationTypes/Activity/attributes/PrefOrActiveCOMMENTERSVARCHARCommentersconfiguration/relationTypes/Activity/attributes/CommentersREL_ORDER_MANAGEDBOOLEANOrderconfiguration/relationTypes/Managed/attributes/RelOrderPURCHASING_CLASSIFICATIONReltio URI: configuration/relationTypes/AffiliatedPurchasing/attributes/ClassificationMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameCLASSIFICATION_URIVARCHARgenerated key descriptionRELATION_URIVARCHARReltio Relation 
URICLASSIFICATION_TYPEVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/Classification/attributes/ClassificationTypeLKUP_IMS_CLASSIFICATION_TYPECLASSIFICATION_INDICATORVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/Classification/attributes/ClassificationIndicatorLKUP_IMS_CLASSIFICATION_INDICATORCLASSIFICATION_VALUEVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/Classification/attributes/ClassificationValueCLASSIFICATION_VALUE_NUMERIC_QUANTITYVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/Classification/attributes/ClassificationValueNumericQuantitySTATUSVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/Classification/attributes/StatusLKUP_IMS_CLASSIFICATION_STATUSEFFECTIVE_DATEDATEconfiguration/relationTypes/AffiliatedPurchasing/attributes/Classification/attributes/EffectiveDateEND_DATEDATEconfiguration/relationTypes/AffiliatedPurchasing/attributes/Classification/attributes/EndDateNOTESVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/Classification/attributes/NotesPURCHASING_SOURCE_DATAReltio URI: configuration/relationTypes/AffiliatedPurchasing/attributes/SourceDataMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameSOURCE_DATA_URIVARCHARgenerated key descriptionRELATION_URIVARCHARReltio Relation URIDATASET_IDENTIFIERVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/SourceData/attributes/DatasetIdentifierSTART_OBJECT_DATASET_PARTY_IDENTIFIERVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/SourceData/attributes/StartObjectDatasetPartyIdentifierEND_OBJECT_DATASET_PARTY_IDENTIFIERVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/SourceData/attributes/EndObjectDatasetPartyIdentifierRANKVARCHARconfiguration/relationTypes/AffiliatedPurchasing/attributes/SourceData/attributes/RankACTIVITY_PHONEReltio URI: configuration/relationTypes/Activity/attributes/ActPhoneMaterialized: 
noColumnTypeDescriptionReltio Attribute URILOV NameACT_PHONE_URIVARCHARgenerated key descriptionRELATION_URIVARCHARReltio Relation URITYPE_IMSVARCHARconfiguration/relationTypes/Activity/attributes/ActPhone/attributes/TypeIMSLKUP_IMS_COMMUNICATION_TYPENUMBERVARCHARconfiguration/relationTypes/Activity/attributes/ActPhone/attributes/NumberEXTENSIONVARCHARconfiguration/relationTypes/Activity/attributes/ActPhone/attributes/ExtensionRANKVARCHARconfiguration/relationTypes/Activity/attributes/ActPhone/attributes/RankCOUNTRY_CODEVARCHARconfiguration/relationTypes/Activity/attributes/ActPhone/attributes/CountryCodeLKUP_IMS_COUNTRY_CODEAREA_CODEVARCHARconfiguration/relationTypes/Activity/attributes/ActPhone/attributes/AreaCodeLOCAL_NUMBERVARCHARconfiguration/relationTypes/Activity/attributes/ActPhone/attributes/LocalNumberFORMATTED_NUMBERVARCHARFormatted number of the phoneconfiguration/relationTypes/Activity/attributes/ActPhone/attributes/FormattedNumberVALIDATION_STATUSVARCHARconfiguration/relationTypes/Activity/attributes/ActPhone/attributes/ValidationStatusLINE_TYPEVARCHARconfiguration/relationTypes/Activity/attributes/ActPhone/attributes/LineTypeFORMAT_MASKVARCHARconfiguration/relationTypes/Activity/attributes/ActPhone/attributes/FormatMaskDIGIT_COUNTVARCHARconfiguration/relationTypes/Activity/attributes/ActPhone/attributes/DigitCountGEO_AREAVARCHARconfiguration/relationTypes/Activity/attributes/ActPhone/attributes/GeoAreaGEO_COUNTRYVARCHARconfiguration/relationTypes/Activity/attributes/ActPhone/attributes/GeoCountryACTIVEBOOLEANDO NOT USE THIS ATTRIBUTE - will be deprecatedconfiguration/relationTypes/Activity/attributes/ActPhone/attributes/ActiveACTIVITY_PRIVACY_PREFERENCESReltio URI: configuration/relationTypes/Activity/attributes/PrivacyPreferencesMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NamePRIVACY_PREFERENCES_URIVARCHARgenerated key descriptionRELATION_URIVARCHARReltio Relation 
URIPHONE_OPT_OUTBOOLEANconfiguration/relationTypes/Activity/attributes/PrivacyPreferences/attributes/PhoneOptOutALLOWED_TO_CONTACTBOOLEANconfiguration/relationTypes/Activity/attributes/PrivacyPreferences/attributes/AllowedToContactEMAIL_OPT_OUTBOOLEANconfiguration/relationTypes/Activity/attributes/PrivacyPreferences/attributes/EmailOptOutMAIL_OPT_OUTBOOLEANconfiguration/relationTypes/Activity/attributes/PrivacyPreferences/attributes/MailOptOutFAX_OPT_OUTBOOLEANconfiguration/relationTypes/Activity/attributes/PrivacyPreferences/attributes/FaxOptOutREMOTE_OPT_OUTBOOLEANconfiguration/relationTypes/Activity/attributes/PrivacyPreferences/attributes/RemoteOptOutOPT_OUT_ONEKEYBOOLEANconfiguration/relationTypes/Activity/attributes/PrivacyPreferences/attributes/OptOutOnekeyVISIT_OPT_OUTBOOLEANconfiguration/relationTypes/Activity/attributes/PrivacyPreferences/attributes/VisitOptOutACTIVITY_SPECIALITIESReltio URI: configuration/relationTypes/Activity/attributes/SpecialitiesMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameSPECIALITIES_URIVARCHARgenerated key descriptionRELATION_URIVARCHARReltio Relation 
URISPECIALTY_TYPEVARCHARconfiguration/relationTypes/Activity/attributes/Specialities/attributes/SpecialtyTypeLKUP_IMS_SPECIALTY_TYPESPECIALTYVARCHARconfiguration/relationTypes/Activity/attributes/Specialities/attributes/SpecialtyLKUP_IMS_SPECIALTYEMAIL_OPT_OUTBOOLEANconfiguration/relationTypes/Activity/attributes/Specialities/attributes/EmailOptOutDESCVARCHARconfiguration/relationTypes/Activity/attributes/Specialities/attributes/DescGROUPVARCHARconfiguration/relationTypes/Activity/attributes/Specialities/attributes/GroupSOURCE_CDVARCHARconfiguration/relationTypes/Activity/attributes/Specialities/attributes/SourceCDSPECIALTY_DETAILVARCHARconfiguration/relationTypes/Activity/attributes/Specialities/attributes/SpecialtyDetailPROFESSION_CODEVARCHARconfiguration/relationTypes/Activity/attributes/Specialities/attributes/ProfessionCodeRANKVARCHARconfiguration/relationTypes/Activity/attributes/Specialities/attributes/RankPRIMARY_SPECIALTY_FLAGBOOLEANPrimary Specialty flag to be populated by client teams according to business rulesconfiguration/relationTypes/Activity/attributes/Specialities/attributes/PrimarySpecialtyFlagSORT_ORDERVARCHARconfiguration/relationTypes/Activity/attributes/Specialities/attributes/SortOrderBEST_RECORDVARCHARconfiguration/relationTypes/Activity/attributes/Specialities/attributes/BestRecordSUB_SPECIALTYVARCHARconfiguration/relationTypes/Activity/attributes/Specialities/attributes/SubSpecialtyLKUP_IMS_SPECIALTYSUB_SPECIALTY_RANKVARCHARSubSpecialty Rankconfiguration/relationTypes/Activity/attributes/Specialities/attributes/SubSpecialtyRankACTIVITY_IDENTIFIERSReltio URI: configuration/relationTypes/Activity/attributes/ActIdentifiersMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameACT_IDENTIFIERS_URIVARCHARgenerated key descriptionRELATION_URIVARCHARReltio Relation 
URIIDVARCHARconfiguration/relationTypes/Activity/attributes/ActIdentifiers/attributes/IDTYPEVARCHARconfiguration/relationTypes/Activity/attributes/ActIdentifiers/attributes/TypeLKUP_IMS_HCP_IDENTIFIER_TYPEORDERVARCHARDisplays the order of priority for an MPN for those facilities that share an MPN. Valid values are: P - the MPN on a business record is the primary identifier for the business and O - the MPN is a secondary identifier. (Using P for the MPN supports aggregating clinical volumes and avoids double counting).configuration/relationTypes/Activity/attributes/ActIdentifiers/attributes/OrderAUTHORIZATION_STATUSVARCHARAuthorization Statusconfiguration/relationTypes/Activity/attributes/ActIdentifiers/attributes/AuthorizationStatusLKUP_IMS_IDENTIFIER_STATUSNATIONAL_ID_ATTRIBUTEVARCHARconfiguration/relationTypes/Activity/attributes/ActIdentifiers/attributes/NationalIdAttributeACTIVITY_ADDITIONAL_ATTRIBUTESReltio URI: configuration/relationTypes/Activity/attributes/AdditionalAttributesMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameADDITIONAL_ATTRIBUTES_URIVARCHARgenerated key descriptionRELATION_URIVARCHARReltio Relation URIATTRIBUTE_NAMEVARCHARconfiguration/relationTypes/Activity/attributes/AdditionalAttributes/attributes/AttributeNameATTRIBUTE_TYPEVARCHARconfiguration/relationTypes/Activity/attributes/AdditionalAttributes/attributes/AttributeTypeLKUP_IMS_TYPE_CODEATTRIBUTE_VALUEVARCHARconfiguration/relationTypes/Activity/attributes/AdditionalAttributes/attributes/AttributeValueATTRIBUTE_RANKVARCHARconfiguration/relationTypes/Activity/attributes/AdditionalAttributes/attributes/AttributeRankADDITIONAL_INFOVARCHARconfiguration/relationTypes/Activity/attributes/AdditionalAttributes/attributes/AdditionalInfoACTIVITY_BUSINESS_HOURSReltio URI: configuration/relationTypes/Activity/attributes/BusinessHoursMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameBUSINESS_HOURS_URIVARCHARgenerated key descriptionRELATION_URIVARCHARReltio Relation 
URIDAYVARCHARconfiguration/relationTypes/Activity/attributes/BusinessHours/attributes/DayPERIODVARCHARconfiguration/relationTypes/Activity/attributes/BusinessHours/attributes/PeriodTIME_SLOTVARCHARconfiguration/relationTypes/Activity/attributes/BusinessHours/attributes/TimeSlotSTART_TIMEVARCHARconfiguration/relationTypes/Activity/attributes/BusinessHours/attributes/StartTimeEND_TIMEVARCHARconfiguration/relationTypes/Activity/attributes/BusinessHours/attributes/EndTimeAPPOINTMENT_ONLYBOOLEANconfiguration/relationTypes/Activity/attributes/BusinessHours/attributes/AppointmentOnlyPERIOD_STARTVARCHARconfiguration/relationTypes/Activity/attributes/BusinessHours/attributes/PeriodStartPERIOD_ENDVARCHARconfiguration/relationTypes/Activity/attributes/BusinessHours/attributes/PeriodEndPERIOD_OF_DAYVARCHARconfiguration/relationTypes/Activity/attributes/BusinessHours/attributes/PeriodOfDayACTIVITY_AFFILIATION_ROLEReltio URI: configuration/relationTypes/Activity/attributes/AffiliationRoleMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameAFFILIATION_ROLE_URIVARCHARgenerated key descriptionRELATION_URIVARCHARReltio Relation URIROLE_RANKVARCHARconfiguration/relationTypes/Activity/attributes/AffiliationRole/attributes/RoleRankROLE_NAMEVARCHARconfiguration/relationTypes/Activity/attributes/AffiliationRole/attributes/RoleNameLKUP_IMS_ROLEROLE_ATTRIBUTEVARCHARconfiguration/relationTypes/Activity/attributes/AffiliationRole/attributes/RoleAttributeROLE_TYPE_ATTRIBUTEVARCHARconfiguration/relationTypes/Activity/attributes/AffiliationRole/attributes/RoleTypeAttributeROLE_STATUSVARCHARconfiguration/relationTypes/Activity/attributes/AffiliationRole/attributes/RoleStatusBEST_ROLE_INDICATORVARCHARconfiguration/relationTypes/Activity/attributes/AffiliationRole/attributes/BestRoleIndicatorACTIVITY_EMAILReltio URI: configuration/relationTypes/Activity/attributes/ActEmailMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameACT_EMAIL_URIVARCHARgenerated key 
descriptionRELATION_URIVARCHARReltio Relation URITYPE_IMSVARCHARconfiguration/relationTypes/Activity/attributes/ActEmail/attributes/TypeIMSLKUP_IMS_COMMUNICATION_TYPEEMAILVARCHARconfiguration/relationTypes/Activity/attributes/ActEmail/attributes/EmailDOMAINVARCHARconfiguration/relationTypes/Activity/attributes/ActEmail/attributes/DomainDOMAIN_TYPEVARCHARconfiguration/relationTypes/Activity/attributes/ActEmail/attributes/DomainTypeUSERNAMEVARCHARconfiguration/relationTypes/Activity/attributes/ActEmail/attributes/UsernameRANKVARCHARconfiguration/relationTypes/Activity/attributes/ActEmail/attributes/RankVALIDATION_STATUSVARCHARconfiguration/relationTypes/Activity/attributes/ActEmail/attributes/ValidationStatusACTIVEBOOLEANconfiguration/relationTypes/Activity/attributes/ActEmail/attributes/ActiveACTIVITY_BRICKReltio URI: configuration/relationTypes/Activity/attributes/BrickMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameBRICK_URIVARCHARgenerated key descriptionRELATION_URIVARCHARReltio Relation URITYPEVARCHARconfiguration/relationTypes/Activity/attributes/Brick/attributes/TypeLKUP_IMS_BRICK_TYPEBRICK_VALUEVARCHARconfiguration/relationTypes/Activity/attributes/Brick/attributes/BrickValueLKUP_IMS_BRICK_VALUESORT_ORDERVARCHARconfiguration/relationTypes/Activity/attributes/Brick/attributes/SortOrderACTIVITY_CLASSIFICATIONReltio URI: configuration/relationTypes/Activity/attributes/ClassificationMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameCLASSIFICATION_URIVARCHARgenerated key descriptionRELATION_URIVARCHARReltio Relation 
URICLASSIFICATION_TYPEVARCHARconfiguration/relationTypes/Activity/attributes/Classification/attributes/ClassificationTypeLKUP_IMS_CLASSIFICATION_TYPECLASSIFICATION_INDICATORVARCHARconfiguration/relationTypes/Activity/attributes/Classification/attributes/ClassificationIndicatorLKUP_IMS_CLASSIFICATION_INDICATORCLASSIFICATION_VALUEVARCHARconfiguration/relationTypes/Activity/attributes/Classification/attributes/ClassificationValueCLASSIFICATION_VALUE_NUMERIC_QUANTITYVARCHARconfiguration/relationTypes/Activity/attributes/Classification/attributes/ClassificationValueNumericQuantitySTATUSVARCHARconfiguration/relationTypes/Activity/attributes/Classification/attributes/StatusLKUP_IMS_CLASSIFICATION_STATUSEFFECTIVE_DATEDATEconfiguration/relationTypes/Activity/attributes/Classification/attributes/EffectiveDateEND_DATEDATEconfiguration/relationTypes/Activity/attributes/Classification/attributes/EndDateNOTESVARCHARconfiguration/relationTypes/Activity/attributes/Classification/attributes/NotesACTIVITY_SOURCE_DATAReltio URI: configuration/relationTypes/Activity/attributes/SourceDataMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameSOURCE_DATA_URIVARCHARgenerated key descriptionRELATION_URIVARCHARReltio Relation URIDATASET_IDENTIFIERVARCHARconfiguration/relationTypes/Activity/attributes/SourceData/attributes/DatasetIdentifierSTART_OBJECT_DATASET_PARTY_IDENTIFIERVARCHARconfiguration/relationTypes/Activity/attributes/SourceData/attributes/StartObjectDatasetPartyIdentifierEND_OBJECT_DATASET_PARTY_IDENTIFIERVARCHARconfiguration/relationTypes/Activity/attributes/SourceData/attributes/EndObjectDatasetPartyIdentifierRANKVARCHARconfiguration/relationTypes/Activity/attributes/SourceData/attributes/RankMANAGED_CLASSIFICATIONReltio URI: configuration/relationTypes/Managed/attributes/ClassificationMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameCLASSIFICATION_URIVARCHARgenerated key descriptionRELATION_URIVARCHARReltio Relation 
URICLASSIFICATION_TYPEVARCHARconfiguration/relationTypes/Managed/attributes/Classification/attributes/ClassificationTypeLKUP_IMS_CLASSIFICATION_TYPECLASSIFICATION_INDICATORVARCHARconfiguration/relationTypes/Managed/attributes/Classification/attributes/ClassificationIndicatorLKUP_IMS_CLASSIFICATION_INDICATORCLASSIFICATION_VALUEVARCHARconfiguration/relationTypes/Managed/attributes/Classification/attributes/ClassificationValueCLASSIFICATION_VALUE_NUMERIC_QUANTITYVARCHARconfiguration/relationTypes/Managed/attributes/Classification/attributes/ClassificationValueNumericQuantitySTATUSVARCHARconfiguration/relationTypes/Managed/attributes/Classification/attributes/StatusLKUP_IMS_CLASSIFICATION_STATUSEFFECTIVE_DATEDATEconfiguration/relationTypes/Managed/attributes/Classification/attributes/EffectiveDateEND_DATEDATEconfiguration/relationTypes/Managed/attributes/Classification/attributes/EndDateNOTESVARCHARconfiguration/relationTypes/Managed/attributes/Classification/attributes/NotesMANAGED_SOURCE_DATAReltio URI: configuration/relationTypes/Managed/attributes/SourceDataMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameSOURCE_DATA_URIVARCHARgenerated key descriptionRELATION_URIVARCHARReltio Relation URIDATASET_IDENTIFIERVARCHARconfiguration/relationTypes/Managed/attributes/SourceData/attributes/DatasetIdentifierSTART_OBJECT_DATASET_PARTY_IDENTIFIERVARCHARconfiguration/relationTypes/Managed/attributes/SourceData/attributes/StartObjectDatasetPartyIdentifierEND_OBJECT_DATASET_PARTY_IDENTIFIERVARCHARconfiguration/relationTypes/Managed/attributes/SourceData/attributes/EndObjectDatasetPartyIdentifierRANKVARCHARconfiguration/relationTypes/Managed/attributes/SourceData/attributes/Rank"
},
{
"title": "Dynamic views for COMPANY MDM Model",
"pageID": "163917858",
"pageLink": "/display/GMDM/Dynamic+views+for+COMPANY+MDM+Model",
"content": "HCPHealth care providerReltio URI: configuration/entityTypes/HCPMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeCOUNTRY_HCPVARCHARCountryconfiguration/entityTypes/HCP/attributes/CountryCOMPANY_CUST_IDVARCHARAn auto-generated unique COMPANY id assigned to an HCPconfiguration/entityTypes/HCP/attributes/COMPANYCustIDPREFIXVARCHARPrefix added before the name, e.g., Mr, Ms, Drconfiguration/entityTypes/HCP/attributes/PrefixHCPPrefixNAMEVARCHARNameconfiguration/entityTypes/HCP/attributes/NameFIRST_NAMEVARCHARFirst Nameconfiguration/entityTypes/HCP/attributes/FirstNameLAST_NAMEVARCHARLast Nameconfiguration/entityTypes/HCP/attributes/LastNameMIDDLE_NAMEVARCHARMiddle Nameconfiguration/entityTypes/HCP/attributes/MiddleNameCLEANSED_MIDDLE_NAMEVARCHARCleansed Middle Nameconfiguration/entityTypes/HCP/attributes/CleansedMiddleNameSTATUSVARCHARStatus, e.g., Active or Inactiveconfiguration/entityTypes/HCP/attributes/StatusHCPStatusSTATUS_DETAILVARCHARDeactivation reasonconfiguration/entityTypes/HCP/attributes/StatusDetailHCPStatusDetailDEACTIVATION_CODEVARCHARDeactivation reasonconfiguration/entityTypes/HCP/attributes/DeactivationCodeHCPDeactivationReasonCodeSUFFIX_NAMEVARCHARGeneration Suffixconfiguration/entityTypes/HCP/attributes/SuffixNameSuffixNameGENDERVARCHARGenderconfiguration/entityTypes/HCP/attributes/GenderGenderNICKNAMEVARCHARNicknameconfiguration/entityTypes/HCP/attributes/NicknamePREFERRED_NAMEVARCHARPreferred Nameconfiguration/entityTypes/HCP/attributes/PreferredNameFORMATTED_NAMEVARCHARFormatted Nameconfiguration/entityTypes/HCP/attributes/FormattedNameTYPE_CODEVARCHARHCP Type Codeconfiguration/entityTypes/HCP/attributes/TypeCodeHCPTypeSUB_TYPE_CODEVARCHARHCP SubType Codeconfiguration/entityTypes/HCP/attributes/SubTypeCodeHCPSubTypeCodeIS_COMPANY_APPROVED_SPEAKERBOOLEANIs COMPANY Approved 
Speakerconfiguration/entityTypes/HCP/attributes/IsCOMPANYApprovedSpeakerSPEAKER_LAST_BRIEFING_DATEDATELast Briefing Dateconfiguration/entityTypes/HCP/attributes/SpeakerLastBriefingDateSPEAKER_TYPEVARCHARSpeaker typeconfiguration/entityTypes/HCP/attributes/SpeakerTypeSPEAKER_STATUSVARCHARSpeaker Statusconfiguration/entityTypes/HCP/attributes/SpeakerStatusHCPSpeakerStatusSPEAKER_LEVELVARCHARSpeaker Levelconfiguration/entityTypes/HCP/attributes/SpeakerLevelSPEAKER_EFFECTIVE_DATEDATESpeaker Effective Dateconfiguration/entityTypes/HCP/attributes/SpeakerEffectiveDateSPEAKER_DEACTIVATE_REASONVARCHARSpeaker Deactivate Reasonconfiguration/entityTypes/HCP/attributes/SpeakerDeactivateReasonDELETION_DATEDATEDeletion Dateconfiguration/entityTypes/HCP/attributes/DeletionDateACCOUNT_BLOCKEDBOOLEANIndicator of account blocked or notconfiguration/entityTypes/HCP/attributes/AccountBlockedY_O_BVARCHARBirth Yearconfiguration/entityTypes/HCP/attributes/YoBD_O_DDATEconfiguration/entityTypes/HCP/attributes/DoDY_O_DVARCHARconfiguration/entityTypes/HCP/attributes/YoDTERRITORY_NUMBERVARCHARTerritory Numberconfiguration/entityTypes/HCP/attributes/TerritoryNumberWEBSITE_URLVARCHARWebsite URLconfiguration/entityTypes/HCP/attributes/WebsiteURLTITLEVARCHARTitle of HCPconfiguration/entityTypes/HCP/attributes/TitleHCPTitleEFFECTIVE_END_DATEDATEconfiguration/entityTypes/HCP/attributes/EffectiveEndDateCOMPANY_WATCH_INDBOOLEANCOMPANY Watch Indconfiguration/entityTypes/HCP/attributes/COMPANYWatchIndKOL_STATUSBOOLEANKOL Statusconfiguration/entityTypes/HCP/attributes/KOLStatusTHIRD_PARTY_DECILVARCHARThird Party Decilconfiguration/entityTypes/HCP/attributes/ThirdPartyDecilFEDERAL_EMP_LETTER_DATEDATEFederal Emp Letter Dateconfiguration/entityTypes/HCP/attributes/FederalEmpLetterDateMARKETING_CONTRACT_CODEVARCHARMarketing Contract Codeconfiguration/entityTypes/HCP/attributes/MarketingContractCodeCURRICULUM_VITAE_LINKVARCHARCurriculum Vitae 
Linkconfiguration/entityTypes/HCP/attributes/CurriculumVitaeLinkSPEAKER_TRAVEL_INDICATORVARCHARSpeaker Travel Indicatorconfiguration/entityTypes/HCP/attributes/SpeakerTravelIndicatorSPEAKER_INFOVARCHARSpeaker Informationconfiguration/entityTypes/HCP/attributes/SpeakerInfoDEGREEVARCHARDegree Informationconfiguration/entityTypes/HCP/attributes/DegreePRESENT_EMPLOYMENTVARCHARPresent Employmentconfiguration/entityTypes/HCP/attributes/PresentEmploymentPE_CDEMPLOYMENT_TYPE_CODEVARCHAREmployment Type Codeconfiguration/entityTypes/HCP/attributes/EmploymentTypeCodeEMPLOYMENT_TYPE_DESCVARCHAREmployment Type Descriptionconfiguration/entityTypes/HCP/attributes/EmploymentTypeDescTYPE_OF_PRACTICEVARCHARType Of Practiceconfiguration/entityTypes/HCP/attributes/TypeOfPracticeTOP_CDTYPE_OF_PRACTICE_DESCVARCHARType Of Practice Descriptionconfiguration/entityTypes/HCP/attributes/TypeOfPracticeDescSCHOOL_SEQ_NUMBERVARCHARSchool Sequence Numberconfiguration/entityTypes/HCP/attributes/SchoolSeqNumberMRM_DELETE_FLAGBOOLEANMRM Delete Flagconfiguration/entityTypes/HCP/attributes/MRMDeleteFlagMRM_DELETE_DATEDATEMRM Delete Dateconfiguration/entityTypes/HCP/attributes/MRMDeleteDateCNCY_DATEDATECNCY Dateconfiguration/entityTypes/HCP/attributes/CNCYDateAMA_HOSPITALVARCHARAMA Hospital Infoconfiguration/entityTypes/HCP/attributes/AMAHospitalAMA_HOSPITAL_DESCVARCHARAMA Hospital Descconfiguration/entityTypes/HCP/attributes/AMAHospitalDescPRACTISE_AT_HOSPITALVARCHARPractise At Hospitalconfiguration/entityTypes/HCP/attributes/PractiseAtHospitalSEGMENT_IDVARCHARSegment IDconfiguration/entityTypes/HCP/attributes/SegmentIDSEGMENT_DESCVARCHARSegment Descconfiguration/entityTypes/HCP/attributes/SegmentDescDCR_STATUSVARCHARStatus of HCP profileconfiguration/entityTypes/HCP/attributes/DCRStatusDCRStatusPREFERRED_LANGUAGEVARCHARLanguage preferenceconfiguration/entityTypes/HCP/attributes/PreferredLanguageSOURCE_TYPEVARCHARType of the 
sourceconfiguration/entityTypes/HCP/attributes/SourceTypeSTATE_UPDATE_DATEDATEUpdate date of stateconfiguration/entityTypes/HCP/attributes/StateUpdateDateSOURCE_UPDATE_DATEDATEUpdate date at sourceconfiguration/entityTypes/HCP/attributes/SourceUpdateDateCOMMENTERSVARCHARCommentersconfiguration/entityTypes/HCP/attributes/CommentersIMAGE_GALLERYVARCHARconfiguration/entityTypes/HCP/attributes/ImageGalleryBIRTH_CITYVARCHARBirth Cityconfiguration/entityTypes/HCP/attributes/BirthCityBIRTH_STATEVARCHARBirth Stateconfiguration/entityTypes/HCP/attributes/BirthStateStateBIRTH_COUNTRYVARCHARBirth Countryconfiguration/entityTypes/HCP/attributes/BirthCountryCountryD_O_BDATEDate of Birthconfiguration/entityTypes/HCP/attributes/DoBORIGINAL_SOURCE_NAMEVARCHAROriginal Source Nameconfiguration/entityTypes/HCP/attributes/OriginalSourceNameSOURCE_MATCH_CATEGORYVARCHARSource Match Categoryconfiguration/entityTypes/HCP/attributes/SourceMatchCategoryALTERNATE_NAMEReltio URI: configuration/entityTypes/HCP/attributes/AlternateNameMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameALTERNATE_NAME_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeNAME_TYPE_CODEVARCHARconfiguration/entityTypes/HCP/attributes/AlternateName/attributes/NameTypeCodeHCPAlternateNameTypeFULL_NAMEVARCHARconfiguration/entityTypes/HCP/attributes/AlternateName/attributes/FullNameFIRST_NAMEVARCHARconfiguration/entityTypes/HCP/attributes/AlternateName/attributes/FirstNameMIDDLE_NAMEVARCHARconfiguration/entityTypes/HCP/attributes/AlternateName/attributes/MiddleNameLAST_NAMEVARCHARconfiguration/entityTypes/HCP/attributes/AlternateName/attributes/LastNameVERSIONVARCHARconfiguration/entityTypes/HCP/attributes/AlternateName/attributes/VersionADDRESSESReltio URI: configuration/entityTypes/HCP/attributes/Addresses, configuration/entityTypes/HCO/attributes/Addresses, 
configuration/entityTypes/MCO/attributes/AddressesMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameADDRESSES_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeADDRESS_TYPEVARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/AddressType, configuration/entityTypes/HCO/attributes/Addresses/attributes/AddressType, configuration/entityTypes/MCO/attributes/Addresses/attributes/AddressTypeAddressTypeCOMPANY_ADDRESS_IDVARCHARCOMPANY Address IDconfiguration/entityTypes/HCP/attributes/Addresses/attributes/COMPANYAddressID, configuration/entityTypes/HCO/attributes/Addresses/attributes/COMPANYAddressID, configuration/entityTypes/MCO/attributes/Addresses/attributes/COMPANYAddressIDADDRESS_LINE1VARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/AddressLine1, configuration/entityTypes/HCO/attributes/Addresses/attributes/AddressLine1, configuration/entityTypes/MCO/attributes/Addresses/attributes/AddressLine1ADDRESS_LINE2VARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/AddressLine2, configuration/entityTypes/HCO/attributes/Addresses/attributes/AddressLine2, configuration/entityTypes/MCO/attributes/Addresses/attributes/AddressLine2ADDRESS_LINE3VARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/AddressLine3, configuration/entityTypes/HCO/attributes/Addresses/attributes/AddressLine3, configuration/entityTypes/MCO/attributes/Addresses/attributes/AddressLine3ADDRESS_LINE4VARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/AddressLine4, configuration/entityTypes/HCO/attributes/Addresses/attributes/AddressLine4, configuration/entityTypes/MCO/attributes/Addresses/attributes/AddressLine4CITYVARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/City, configuration/entityTypes/HCO/attributes/Addresses/attributes/City, 
configuration/entityTypes/MCO/attributes/Addresses/attributes/CitySTATE_PROVINCEVARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/StateProvince, configuration/entityTypes/HCO/attributes/Addresses/attributes/StateProvince, configuration/entityTypes/MCO/attributes/Addresses/attributes/StateProvinceStateCOUNTRY_ADDRESSESVARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/Country, configuration/entityTypes/HCO/attributes/Addresses/attributes/Country, configuration/entityTypes/MCO/attributes/Addresses/attributes/CountryCountryPO_BOXVARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/POBox, configuration/entityTypes/HCO/attributes/Addresses/attributes/POBox, configuration/entityTypes/MCO/attributes/Addresses/attributes/POBoxZIP5VARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/Zip5, configuration/entityTypes/HCO/attributes/Addresses/attributes/Zip5, configuration/entityTypes/MCO/attributes/Addresses/attributes/Zip5ZIP4VARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/Zip4, configuration/entityTypes/HCO/attributes/Addresses/attributes/Zip4, configuration/entityTypes/MCO/attributes/Addresses/attributes/Zip4STREETVARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/Street, configuration/entityTypes/HCO/attributes/Addresses/attributes/Street, configuration/entityTypes/MCO/attributes/Addresses/attributes/StreetPOSTAL_CODE_EXTENSIONVARCHARPostal Code Extensionconfiguration/entityTypes/HCP/attributes/Addresses/attributes/PostalCodeExtension, configuration/entityTypes/HCO/attributes/Addresses/attributes/PostalCodeExtension, configuration/entityTypes/MCO/attributes/Addresses/attributes/PostalCodeExtensionADDRESS_USAGE_TAGVARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/AddressUsageTag, configuration/entityTypes/HCO/attributes/Addresses/attributes/AddressUsageTagAddressUsageTagCNCY_DATEDATECNCY 
Dateconfiguration/entityTypes/HCP/attributes/Addresses/attributes/CNCYDate, configuration/entityTypes/HCO/attributes/Addresses/attributes/CNCYDateCBSA_CODEVARCHARCore Based Statistical Areaconfiguration/entityTypes/HCP/attributes/Addresses/attributes/CBSACode, configuration/entityTypes/HCO/attributes/Addresses/attributes/CBSACode, configuration/entityTypes/MCO/attributes/Addresses/attributes/CBSACodePREMISEVARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/Premise, configuration/entityTypes/HCO/attributes/Addresses/attributes/PremiseISO3166-2VARCHARThis field holds the ISO 3166 2-character country code.configuration/entityTypes/HCP/attributes/Addresses/attributes/ISO3166-2, configuration/entityTypes/HCO/attributes/Addresses/attributes/ISO3166-2, configuration/entityTypes/MCO/attributes/Addresses/attributes/ISO3166-2ISO3166-3VARCHARThis field holds the ISO 3166 3-character country code.configuration/entityTypes/HCP/attributes/Addresses/attributes/ISO3166-3, configuration/entityTypes/HCO/attributes/Addresses/attributes/ISO3166-3, configuration/entityTypes/MCO/attributes/Addresses/attributes/ISO3166-3ISO3166-NVARCHARThis field holds the ISO 3166 N-digit numeric country code.configuration/entityTypes/HCP/attributes/Addresses/attributes/ISO3166-N, configuration/entityTypes/HCO/attributes/Addresses/attributes/ISO3166-N, configuration/entityTypes/MCO/attributes/Addresses/attributes/ISO3166-NLATITUDEVARCHARLatitudeconfiguration/entityTypes/HCP/attributes/Addresses/attributes/Latitude, configuration/entityTypes/HCO/attributes/Addresses/attributes/Latitude, configuration/entityTypes/MCO/attributes/Addresses/attributes/LatitudeLONGITUDEVARCHARLongitudeconfiguration/entityTypes/HCP/attributes/Addresses/attributes/Longitude, configuration/entityTypes/HCO/attributes/Addresses/attributes/Longitude, configuration/entityTypes/MCO/attributes/Addresses/attributes/LongitudeGEO_ACCURACYVARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/GeoAccuracy, 
configuration/entityTypes/HCO/attributes/Addresses/attributes/GeoAccuracy, configuration/entityTypes/MCO/attributes/Addresses/attributes/GeoAccuracyVERIFICATION_STATUSVARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/VerificationStatus, configuration/entityTypes/HCO/attributes/Addresses/attributes/VerificationStatus, configuration/entityTypes/MCO/attributes/Addresses/attributes/VerificationStatusVERIFICATION_STATUS_DETAILSVARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/VerificationStatusDetails, configuration/entityTypes/HCO/attributes/Addresses/attributes/VerificationStatusDetails, configuration/entityTypes/MCO/attributes/Addresses/attributes/VerificationStatusDetailsAVCVARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/AVC, configuration/entityTypes/HCO/attributes/Addresses/attributes/AVC, configuration/entityTypes/MCO/attributes/Addresses/attributes/AVCSETTING_TYPEVARCHARSetting Typeconfiguration/entityTypes/HCP/attributes/Addresses/attributes/SettingType, configuration/entityTypes/HCO/attributes/Addresses/attributes/SettingTypeADDRESS_SETTING_TYPE_DESCVARCHARAddress Setting Type Descconfiguration/entityTypes/HCP/attributes/Addresses/attributes/AddressSettingTypeDesc, configuration/entityTypes/HCO/attributes/Addresses/attributes/AddressSettingTypeDescCATEGORYVARCHARCategoryconfiguration/entityTypes/HCP/attributes/Addresses/attributes/Category, configuration/entityTypes/HCO/attributes/Addresses/attributes/CategoryAddressCategoryFIPS_CODEVARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/FIPSCode, configuration/entityTypes/HCO/attributes/Addresses/attributes/FIPSCodeFIPS_COUNTY_CODEVARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/FIPSCountyCode, configuration/entityTypes/HCO/attributes/Addresses/attributes/FIPSCountyCodeFIPS_COUNTY_CODE_DESCVARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/FIPSCountyCodeDesc, 
configuration/entityTypes/HCO/attributes/Addresses/attributes/FIPSCountyCodeDescFIPS_STATE_CODEVARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/FIPSStateCode, configuration/entityTypes/HCO/attributes/Addresses/attributes/FIPSStateCodeFIPS_STATE_CODE_DESCVARCHARconfiguration/entityTypes/HCP/attributes/Addresses/attributes/FIPSStateCodeDesc, configuration/entityTypes/HCO/attributes/Addresses/attributes/FIPSStateCodeDescCARE_OFVARCHARCare Ofconfiguration/entityTypes/HCP/attributes/Addresses/attributes/CareOf, configuration/entityTypes/HCO/attributes/Addresses/attributes/CareOfMAIN_PHYSICAL_OFFICEVARCHARMain Physical Officeconfiguration/entityTypes/HCP/attributes/Addresses/attributes/MainPhysicalOffice, configuration/entityTypes/HCO/attributes/Addresses/attributes/MainPhysicalOfficeDELIVERABILITY_CONFIDENCEVARCHARDeliverability Confidenceconfiguration/entityTypes/HCP/attributes/Addresses/attributes/DeliverabilityConfidence, configuration/entityTypes/HCO/attributes/Addresses/attributes/DeliverabilityConfidenceAPPLIDVARCHARAPPLIDconfiguration/entityTypes/HCP/attributes/Addresses/attributes/APPLID, configuration/entityTypes/HCO/attributes/Addresses/attributes/APPLIDSMPLDLV_INDBOOLEANSMPLDLV Indconfiguration/entityTypes/HCP/attributes/Addresses/attributes/SMPLDLVInd, configuration/entityTypes/HCO/attributes/Addresses/attributes/SMPLDLVIndSTATUSVARCHARStatusconfiguration/entityTypes/HCP/attributes/Addresses/attributes/Status, configuration/entityTypes/HCO/attributes/Addresses/attributes/StatusAddressStatusSTARTER_ELIGIBLE_FLAGVARCHARStarterEligibleFlagconfiguration/entityTypes/HCP/attributes/Addresses/attributes/StarterEligibleFlag, configuration/entityTypes/HCO/attributes/Addresses/attributes/StarterEligibleFlagDEA_FLAGBOOLEANDEA Flagconfiguration/entityTypes/HCP/attributes/Addresses/attributes/DEAFlag, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEAFlagUSAGE_TYPEVARCHARUsage 
Typeconfiguration/entityTypes/HCP/attributes/Addresses/attributes/UsageType, configuration/entityTypes/HCO/attributes/Addresses/attributes/UsageTypePRIMARYBOOLEANPrimary Addressconfiguration/entityTypes/HCP/attributes/Addresses/attributes/Primary, configuration/entityTypes/HCO/attributes/Addresses/attributes/PrimaryEFFECTIVE_START_DATEDATEEffective Start Dateconfiguration/entityTypes/HCP/attributes/Addresses/attributes/EffectiveStartDate, configuration/entityTypes/HCO/attributes/Addresses/attributes/EffectiveStartDateEFFECTIVE_END_DATEDATEEffective End Dateconfiguration/entityTypes/HCP/attributes/Addresses/attributes/EffectiveEndDate, configuration/entityTypes/HCO/attributes/Addresses/attributes/EffectiveEndDateADDRESS_RANKVARCHARAddress Rank for priorityconfiguration/entityTypes/HCP/attributes/Addresses/attributes/AddressRank, configuration/entityTypes/HCO/attributes/Addresses/attributes/AddressRank, configuration/entityTypes/MCO/attributes/Addresses/attributes/AddressRankSOURCE_SEGMENT_CODEVARCHARSource Segment Codeconfiguration/entityTypes/HCP/attributes/Addresses/attributes/SourceSegmentCode, configuration/entityTypes/HCO/attributes/Addresses/attributes/SourceSegmentCodeSEGMENT1VARCHARSegment1configuration/entityTypes/HCP/attributes/Addresses/attributes/Segment1, configuration/entityTypes/HCO/attributes/Addresses/attributes/Segment1SEGMENT2VARCHARSegment2configuration/entityTypes/HCP/attributes/Addresses/attributes/Segment2, configuration/entityTypes/HCO/attributes/Addresses/attributes/Segment2SEGMENT3VARCHARSegment3configuration/entityTypes/HCP/attributes/Addresses/attributes/Segment3, configuration/entityTypes/HCO/attributes/Addresses/attributes/Segment3ADDRESS_INDBOOLEANAddressIndconfiguration/entityTypes/HCP/attributes/Addresses/attributes/AddressInd, configuration/entityTypes/HCO/attributes/Addresses/attributes/AddressIndSCRIPT_UTILIZATION_WEIGHTVARCHARScript Utilization 
Weightconfiguration/entityTypes/HCP/attributes/Addresses/attributes/ScriptUtilizationWeight, configuration/entityTypes/HCO/attributes/Addresses/attributes/ScriptUtilizationWeightBUSINESS_ACTIVITY_CODEVARCHARBusiness Activity Codeconfiguration/entityTypes/HCP/attributes/Addresses/attributes/BusinessActivityCode, configuration/entityTypes/HCO/attributes/Addresses/attributes/BusinessActivityCodeBUSINESS_ACTIVITY_DESCVARCHARBusiness Activity Descconfiguration/entityTypes/HCP/attributes/Addresses/attributes/BusinessActivityDesc, configuration/entityTypes/HCO/attributes/Addresses/attributes/BusinessActivityDescPRACTICE_LOCATION_RANKVARCHARPractice Location Rankconfiguration/entityTypes/HCP/attributes/Addresses/attributes/PracticeLocationRank, configuration/entityTypes/HCO/attributes/Addresses/attributes/PracticeLocationRankPracticeLocationRankPRACTICE_LOCATION_CONFIDENCE_INDVARCHARPractice Location Confidence Indconfiguration/entityTypes/HCP/attributes/Addresses/attributes/PracticeLocationConfidenceInd, configuration/entityTypes/HCO/attributes/Addresses/attributes/PracticeLocationConfidenceIndPRACTICE_LOCATION_CONFIDENCE_DESCVARCHARPractice Location Confidence Descconfiguration/entityTypes/HCP/attributes/Addresses/attributes/PracticeLocationConfidenceDesc, configuration/entityTypes/HCO/attributes/Addresses/attributes/PracticeLocationConfidenceDescSINGLE_ADDRESS_INDBOOLEANSingle Address Indconfiguration/entityTypes/HCP/attributes/Addresses/attributes/SingleAddressInd, configuration/entityTypes/HCO/attributes/Addresses/attributes/SingleAddressIndSUB_ADMINISTRATIVE_AREAVARCHARThis field holds the smallest geographic data element within a country. 
For instance, USA County.configuration/entityTypes/HCP/attributes/Addresses/attributes/SubAdministrativeArea, configuration/entityTypes/HCO/attributes/Addresses/attributes/SubAdministrativeArea, configuration/entityTypes/MCO/attributes/Addresses/attributes/SubAdministrativeAreaSUPER_ADMINISTRATIVE_AREAVARCHARThis field holds the largest geographic data element within a country.configuration/entityTypes/HCO/attributes/Addresses/attributes/SuperAdministrativeAreaADMINISTRATIVE_AREAVARCHARThis field holds the most common geographic data element within a country. For instance, USA State, and Canadian Province.configuration/entityTypes/HCO/attributes/Addresses/attributes/AdministrativeAreaUNIT_NAMEVARCHARconfiguration/entityTypes/HCO/attributes/Addresses/attributes/UnitNameUNIT_VALUEVARCHARconfiguration/entityTypes/HCO/attributes/Addresses/attributes/UnitValueFLOORVARCHARN/Aconfiguration/entityTypes/HCO/attributes/Addresses/attributes/FloorBUILDINGVARCHARN/Aconfiguration/entityTypes/HCO/attributes/Addresses/attributes/BuildingSUB_BUILDINGVARCHARconfiguration/entityTypes/HCO/attributes/Addresses/attributes/SubBuildingNEIGHBORHOODVARCHARconfiguration/entityTypes/HCO/attributes/Addresses/attributes/NeighborhoodPREMISE_NUMBERVARCHARconfiguration/entityTypes/HCO/attributes/Addresses/attributes/PremiseNumberADDRESSES_SOURCESourceReltio URI: configuration/entityTypes/HCP/attributes/Addresses/attributes/Source, configuration/entityTypes/HCO/attributes/Addresses/attributes/Source, configuration/entityTypes/MCO/attributes/Addresses/attributes/SourceMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameADDRESSES_URIVARCHARGenerated KeySOURCE_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeSOURCE_NAMEVARCHARSourceNameconfiguration/entityTypes/HCP/attributes/Addresses/attributes/Source/attributes/SourceName, 
configuration/entityTypes/HCO/attributes/Addresses/attributes/Source/attributes/SourceName, configuration/entityTypes/MCO/attributes/Addresses/attributes/Source/attributes/SourceNameSOURCE_RANKVARCHARSourceRankconfiguration/entityTypes/HCP/attributes/Addresses/attributes/Source/attributes/SourceRank, configuration/entityTypes/HCO/attributes/Addresses/attributes/Source/attributes/SourceRank, configuration/entityTypes/MCO/attributes/Addresses/attributes/Source/attributes/SourceRankSOURCE_ADDRESS_IDVARCHARSource Address IDconfiguration/entityTypes/HCP/attributes/Addresses/attributes/Source/attributes/SourceAddressID, configuration/entityTypes/HCO/attributes/Addresses/attributes/Source/attributes/SourceAddressID, configuration/entityTypes/MCO/attributes/Addresses/attributes/Source/attributes/SourceAddressIDLEGACY_IQVIA_ADDRESS_IDVARCHARLegacy address idconfiguration/entityTypes/HCP/attributes/Addresses/attributes/Source/attributes/LegacyIQVIAAddressID, configuration/entityTypes/HCO/attributes/Addresses/attributes/Source/attributes/LegacyIQVIAAddressIDADDRESSES_DEADEAReltio URI: configuration/entityTypes/HCP/attributes/Addresses/attributes/DEA, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEAMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameADDRESSES_URIVARCHARGenerated KeyDEA_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeNUMBERVARCHARNumberconfiguration/entityTypes/HCP/attributes/Addresses/attributes/DEA/attributes/Number, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/NumberEXPIRATION_DATEDATEExpiration Dateconfiguration/entityTypes/HCP/attributes/Addresses/attributes/DEA/attributes/ExpirationDate, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/ExpirationDateSTATUSVARCHARStatusconfiguration/entityTypes/HCP/attributes/Addresses/attributes/DEA/attributes/Status, 
configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/StatusAddressDEAStatusSTATUS_DETAILVARCHARDeactivation Reason Codeconfiguration/entityTypes/HCP/attributes/Addresses/attributes/DEA/attributes/StatusDetail, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/StatusDetailHCPDEAStatusDetailDRUG_SCHEDULEVARCHARDrug Scheduleconfiguration/entityTypes/HCP/attributes/Addresses/attributes/DEA/attributes/DrugSchedule, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/DrugScheduleApp-LSCustomer360DEADrugScheduleEFFECTIVE_DATEDATEEffective Dateconfiguration/entityTypes/HCP/attributes/Addresses/attributes/DEA/attributes/EffectiveDate, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/EffectiveDateSTATUS_DATEDATEStatus Dateconfiguration/entityTypes/HCP/attributes/Addresses/attributes/DEA/attributes/StatusDate, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/StatusDateDEA_BUSINESS_ACTIVITYVARCHARBusiness Activityconfiguration/entityTypes/HCP/attributes/Addresses/attributes/DEA/attributes/DEABusinessActivity, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/DEABusinessActivityDEABusinessActivitySUB_BUSINESS_ACTIVITYVARCHARSub Business Activityconfiguration/entityTypes/HCP/attributes/Addresses/attributes/DEA/attributes/SubBusinessActivity, configuration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/SubBusinessActivityDEABusinessSubActivityBUSINESS_ACTIVITY_DESCVARCHARBusiness Activity Descconfiguration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/BusinessActivityDescSUB_BUSINESS_ACTIVITY_DESCVARCHARSub Business Activity Descconfiguration/entityTypes/HCO/attributes/Addresses/attributes/DEA/attributes/SubBusinessActivityDescADDRESSES_OFFICE_INFORMATIONReltio URI: configuration/entityTypes/HCP/attributes/Addresses/attributes/OfficeInformation, configuration/entityTypes/HCO/attributes/Addresses/attributes/OfficeInformationMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameADDRESSES_URIVARCHARGenerated KeyOFFICE_INFORMATION_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeBEST_TIMESVARCHARBest Timesconfiguration/entityTypes/HCP/attributes/Addresses/attributes/OfficeInformation/attributes/BestTimes, configuration/entityTypes/HCO/attributes/Addresses/attributes/OfficeInformation/attributes/BestTimesAPPT_REQUIREDBOOLEANAppointment Required or notconfiguration/entityTypes/HCP/attributes/Addresses/attributes/OfficeInformation/attributes/ApptRequired, configuration/entityTypes/HCO/attributes/Addresses/attributes/OfficeInformation/attributes/ApptRequiredOFFICE_NOTESVARCHAROffice 
Notesconfiguration/entityTypes/HCP/attributes/Addresses/attributes/OfficeInformation/attributes/OfficeNotes, configuration/entityTypes/HCO/attributes/Addresses/attributes/OfficeInformation/attributes/OfficeNotesCOMPLIANCEComplianceReltio URI: configuration/entityTypes/HCP/attributes/ComplianceMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameCOMPLIANCE_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeGO_STATUSVARCHARconfiguration/entityTypes/HCP/attributes/Compliance/attributes/GOStatusHCPComplianceGOStatusPIGO_STATUSVARCHARconfiguration/entityTypes/HCP/attributes/Compliance/attributes/PIGOStatusHCPPIGOStatusNIPPIGO_STATUSVARCHARconfiguration/entityTypes/HCP/attributes/Compliance/attributes/NIPPIGOStatusHCPNIPPIGOStatusPRIMARY_PIGO_RATIONALEVARCHARconfiguration/entityTypes/HCP/attributes/Compliance/attributes/PrimaryPIGORationaleHCPPIGORationaleSECONDARY_PIGO_RATIONALEVARCHARconfiguration/entityTypes/HCP/attributes/Compliance/attributes/SecondaryPIGORationaleHCPPIGORationalePIGOSME_REVIEWVARCHARconfiguration/entityTypes/HCP/attributes/Compliance/attributes/PIGOSMEReviewHCPPIGOSMEReviewGSQ_DATEDATEconfiguration/entityTypes/HCP/attributes/Compliance/attributes/GSQDateDO_NOT_USEBOOLEANconfiguration/entityTypes/HCP/attributes/Compliance/attributes/DoNotUseCHANGE_DATEDATEconfiguration/entityTypes/HCP/attributes/Compliance/attributes/ChangeDateCHANGE_REASONVARCHARconfiguration/entityTypes/HCP/attributes/Compliance/attributes/ChangeReasonMAPPHCP_STATUSVARCHARconfiguration/entityTypes/HCP/attributes/Compliance/attributes/MAPPHCPStatusMAPP_MAILVARCHARconfiguration/entityTypes/HCP/attributes/Compliance/attributes/MAPPMailDISCLOSUREDisclosureReltio URI: configuration/entityTypes/HCP/attributes/DisclosureMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameDISCLOSURE_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry 
CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeBENEFIT_CATEGORYVARCHARBenefit Categoryconfiguration/entityTypes/HCP/attributes/Disclosure/attributes/BenefitCategoryHCPBenefitCategoryBENEFIT_TITLEVARCHARBenefit Titleconfiguration/entityTypes/HCP/attributes/Disclosure/attributes/BenefitTitleHCPBenefitTitleBENEFIT_QUALITYVARCHARBenefit Qualityconfiguration/entityTypes/HCP/attributes/Disclosure/attributes/BenefitQualityHCPBenefitQualityBENEFIT_SPECIALTYVARCHARBenefit Specialtyconfiguration/entityTypes/HCP/attributes/Disclosure/attributes/BenefitSpecialtyHCPBenefitSpecialtyCONTRACT_CLASSIFICATIONVARCHARContract Classificationconfiguration/entityTypes/HCP/attributes/Disclosure/attributes/ContractClassificationCONTRACT_CLASSIFICATION_DATEDATEContract Classification Dateconfiguration/entityTypes/HCP/attributes/Disclosure/attributes/ContractClassificationDateMILITARYBOOLEANMilitaryconfiguration/entityTypes/HCP/attributes/Disclosure/attributes/MilitaryCIVIL_SERVANTBOOLEANCivil Servantconfiguration/entityTypes/HCP/attributes/Disclosure/attributes/CivilServantCREDENTIALCredential InformationReltio URI: configuration/entityTypes/HCP/attributes/CredentialMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameCREDENTIAL_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeCREDENTIALVARCHARconfiguration/entityTypes/HCP/attributes/Credential/attributes/CredentialCredentialOTHER_CDTL_TXTVARCHAROther Credential Textconfiguration/entityTypes/HCP/attributes/Credential/attributes/OtherCdtlTxtPRIMARY_FLAGBOOLEANPrimary Flagconfiguration/entityTypes/HCP/attributes/Credential/attributes/PrimaryFlagEFFECTIVE_END_DATEDATEEffective End Dateconfiguration/entityTypes/HCP/attributes/Credential/attributes/EffectiveEndDatePROFESSIONProfession InformationReltio URI: configuration/entityTypes/HCP/attributes/ProfessionMaterialized: noColumnTypeDescriptionReltio Attribute URILOV 
NamePROFESSION_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypePROFESSIONVARCHARconfiguration/entityTypes/HCP/attributes/Profession/attributes/ProfessionHCPSpecialtyProfessionPROFESSION_SOURCESourceReltio URI: configuration/entityTypes/HCP/attributes/Profession/attributes/SourceMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NamePROFESSION_URIVARCHARGenerated KeySOURCE_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeSOURCE_NAMEVARCHARSourceNameconfiguration/entityTypes/HCP/attributes/Profession/attributes/Source/attributes/SourceNameSOURCE_RANKVARCHARSourceRankconfiguration/entityTypes/HCP/attributes/Profession/attributes/Source/attributes/SourceRankSPECIALITIESReltio URI: configuration/entityTypes/HCP/attributes/Specialities, configuration/entityTypes/HCO/attributes/SpecialitiesMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameSPECIALITIES_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeSPECIALTYVARCHARSpecialty of the entity, e.g., Adult Congenital Heart Diseaseconfiguration/entityTypes/HCP/attributes/Specialities/attributes/Specialty, configuration/entityTypes/HCO/attributes/Specialities/attributes/SpecialtyHCPSpecialty,App-LSCustomer360SpecialtyPROFESSIONVARCHARconfiguration/entityTypes/HCP/attributes/Specialities/attributes/ProfessionHCPSpecialtyProfessionPRIMARYBOOLEANWhether Primary Specialty or notconfiguration/entityTypes/HCP/attributes/Specialities/attributes/Primary, configuration/entityTypes/HCO/attributes/Specialities/attributes/PrimaryRANKVARCHARRankconfiguration/entityTypes/HCP/attributes/Specialities/attributes/RankTRUST_INDICATORVARCHARconfiguration/entityTypes/HCP/attributes/Specialities/attributes/TrustIndicatorDESCVARCHARDO NOT USE THIS 
ATTRIBUTE - will be deprecatedconfiguration/entityTypes/HCP/attributes/Specialities/attributes/DescSPECIALTY_TYPEVARCHARType of Specialty, e.g. Secondaryconfiguration/entityTypes/HCO/attributes/Specialities/attributes/SpecialtyTypeApp-LSCustomer360SpecialtyTypeGROUPVARCHARGroup, Specialty belongs toconfiguration/entityTypes/HCO/attributes/Specialities/attributes/GroupSPECIALTY_DETAILVARCHARDescription of Specialtyconfiguration/entityTypes/HCO/attributes/Specialities/attributes/SpecialtyDetailSPECIALITIES_SOURCEReltio URI: configuration/entityTypes/HCP/attributes/Specialities/attributes/SourceMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameSPECIALITIES_URIVARCHARGenerated KeySOURCE_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeSOURCE_NAMEVARCHARSourceNameconfiguration/entityTypes/HCP/attributes/Specialities/attributes/Source/attributes/SourceNameSOURCE_RANKVARCHARRankconfiguration/entityTypes/HCP/attributes/Specialities/attributes/Source/attributes/SourceRankSUB_SPECIALITIESReltio URI: configuration/entityTypes/HCP/attributes/SubSpecialitiesMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameSUB_SPECIALITIES_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeSPECIALTY_CODEVARCHARSub specialty code of the entityconfiguration/entityTypes/HCP/attributes/SubSpecialities/attributes/SpecialtyCodeSUB_SPECIALTYVARCHARSub specialty of the entityconfiguration/entityTypes/HCP/attributes/SubSpecialities/attributes/SubSpecialtyPROFESSION_CODEVARCHARProfession Codeconfiguration/entityTypes/HCP/attributes/SubSpecialities/attributes/ProfessionCodeSUB_SPECIALITIES_SOURCEReltio URI: configuration/entityTypes/HCP/attributes/SubSpecialities/attributes/SourceMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameSUB_SPECIALITIES_URIVARCHARGenerated 
KeySOURCE_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeSOURCE_NAMEVARCHARSourceNameconfiguration/entityTypes/HCP/attributes/SubSpecialities/attributes/Source/attributes/SourceNameSOURCE_RANKVARCHARRankconfiguration/entityTypes/HCP/attributes/SubSpecialities/attributes/Source/attributes/SourceRankEDUCATIONReltio URI: configuration/entityTypes/HCP/attributes/EducationMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameEDUCATION_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeSCHOOL_CDVARCHARconfiguration/entityTypes/HCP/attributes/Education/attributes/SchoolCDSCHOOL_NAMEVARCHARconfiguration/entityTypes/HCP/attributes/Education/attributes/SchoolNameYEAR_OF_GRADUATIONVARCHARDO NOT USE THIS ATTRIBUTE - will be deprecatedconfiguration/entityTypes/HCP/attributes/Education/attributes/YearOfGraduationSTATEVARCHARconfiguration/entityTypes/HCP/attributes/Education/attributes/StateCOUNTRY_EDUCATIONVARCHARconfiguration/entityTypes/HCP/attributes/Education/attributes/CountryTYPEVARCHARconfiguration/entityTypes/HCP/attributes/Education/attributes/TypeGPAVARCHARconfiguration/entityTypes/HCP/attributes/Education/attributes/GPAGRADUATEDBOOLEANDO NOT USE THIS ATTRIBUTE - will be deprecatedconfiguration/entityTypes/HCP/attributes/Education/attributes/GraduatedEMAILReltio URI: configuration/entityTypes/HCP/attributes/Email, configuration/entityTypes/HCO/attributes/Email, configuration/entityTypes/MCO/attributes/EmailMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameEMAIL_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeTYPEVARCHARType of Email, e.g., Homeconfiguration/entityTypes/HCP/attributes/Email/attributes/Type, 
configuration/entityTypes/HCO/attributes/Email/attributes/Type, configuration/entityTypes/MCO/attributes/Email/attributes/TypeEmailTypeEMAILVARCHAREmail addressconfiguration/entityTypes/HCP/attributes/Email/attributes/Email, configuration/entityTypes/HCO/attributes/Email/attributes/Email, configuration/entityTypes/MCO/attributes/Email/attributes/EmailRANKVARCHARRank used to assign priority to an Emailconfiguration/entityTypes/HCP/attributes/Email/attributes/Rank, configuration/entityTypes/HCO/attributes/Email/attributes/Rank, configuration/entityTypes/MCO/attributes/Email/attributes/RankEMAIL_USAGE_TAGVARCHARconfiguration/entityTypes/HCP/attributes/Email/attributes/EmailUsageTag, configuration/entityTypes/HCO/attributes/Email/attributes/EmailUsageTag, configuration/entityTypes/MCO/attributes/Email/attributes/EmailUsageTagEmailUsageTagUSAGE_TYPEVARCHARUsage Type of an Emailconfiguration/entityTypes/HCP/attributes/Email/attributes/UsageType, configuration/entityTypes/HCO/attributes/Email/attributes/UsageType, configuration/entityTypes/MCO/attributes/Email/attributes/UsageTypeDOMAINVARCHARconfiguration/entityTypes/HCP/attributes/Email/attributes/Domain, configuration/entityTypes/HCO/attributes/Email/attributes/Domain, configuration/entityTypes/MCO/attributes/Email/attributes/DomainVALIDATION_STATUSVARCHARconfiguration/entityTypes/HCP/attributes/Email/attributes/ValidationStatus, configuration/entityTypes/HCO/attributes/Email/attributes/ValidationStatus, configuration/entityTypes/MCO/attributes/Email/attributes/ValidationStatusDOMAIN_TYPEVARCHARStatus of Emailconfiguration/entityTypes/HCO/attributes/Email/attributes/DomainType, configuration/entityTypes/MCO/attributes/Email/attributes/DomainTypeUSERNAMEVARCHARDomain on which Email is createdconfiguration/entityTypes/HCO/attributes/Email/attributes/Username, configuration/entityTypes/MCO/attributes/Email/attributes/UsernameEMAIL_SOURCESourceReltio URI: configuration/entityTypes/HCP/attributes/Email/attributes/Source, 
configuration/entityTypes/HCO/attributes/Email/attributes/Source, configuration/entityTypes/MCO/attributes/Email/attributes/SourceMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameEMAIL_URIVARCHARGenerated KeySOURCE_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeSOURCE_NAMEVARCHARSourceNameconfiguration/entityTypes/HCP/attributes/Email/attributes/Source/attributes/SourceName, configuration/entityTypes/HCO/attributes/Email/attributes/Source/attributes/SourceName, configuration/entityTypes/MCO/attributes/Email/attributes/Source/attributes/SourceNameSOURCE_RANKVARCHARSourceRankconfiguration/entityTypes/HCP/attributes/Email/attributes/Source/attributes/SourceRank, configuration/entityTypes/HCO/attributes/Email/attributes/Source/attributes/SourceRank, configuration/entityTypes/MCO/attributes/Email/attributes/Source/attributes/SourceRankIDENTIFIERSReltio URI: configuration/entityTypes/HCP/attributes/Identifiers, configuration/entityTypes/HCO/attributes/Identifiers, configuration/entityTypes/MCO/attributes/IdentifiersMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameIDENTIFIERS_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeTYPEVARCHARIdentifier Typeconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/Type, configuration/entityTypes/HCO/attributes/Identifiers/attributes/Type, configuration/entityTypes/MCO/attributes/Identifiers/attributes/TypeHCPIdentifierType,HCOIdentifierTypeIDVARCHARIdentifier IDconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/ID, configuration/entityTypes/HCO/attributes/Identifiers/attributes/ID, configuration/entityTypes/MCO/attributes/Identifiers/attributes/IDEXTL_DATEDATEExternal Dateconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/EXTLDateACTIVATION_DATEDATEActivation 
Dateconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/ActivationDate, configuration/entityTypes/HCO/attributes/Identifiers/attributes/ActivationDateREFER_BACK_ID_STATUSVARCHARStatusconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/ReferBackIDStatus, configuration/entityTypes/HCO/attributes/Identifiers/attributes/ReferBackIDStatusDEACTIVATION_DATEDATEIdentifier Deactivation Dateconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/DeactivationDate, configuration/entityTypes/HCO/attributes/Identifiers/attributes/DeactivationDateSTATEVARCHARIdentifier Stateconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/StateStateSOURCE_NAMEVARCHARName of the Identifier sourceconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/SourceName, configuration/entityTypes/HCO/attributes/Identifiers/attributes/SourceName, configuration/entityTypes/MCO/attributes/Identifiers/attributes/SourceNameTRUSTVARCHARTrustconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/Trust, configuration/entityTypes/HCO/attributes/Identifiers/attributes/Trust, configuration/entityTypes/MCO/attributes/Identifiers/attributes/TrustSOURCE_START_DATEDATEStart date at sourceconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/SourceStartDateSOURCE_UPDATE_DATEDATEUpdate date at sourceconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/SourceUpdateDate, configuration/entityTypes/HCO/attributes/Identifiers/attributes/SourceUpdateDateSTATUSVARCHARStatusconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/Status, configuration/entityTypes/HCO/attributes/Identifiers/attributes/StatusHCPIdentifierStatus,HCOIdentifierStatusSTATUS_DETAILVARCHARIdentifier Deactivation Reason Codeconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/StatusDetail, 
configuration/entityTypes/HCO/attributes/Identifiers/attributes/StatusDetailHCPIdentifierStatusDetail,HCOIdentifierStatusDetailDRUG_SCHEDULEVARCHARStatusconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/DrugScheduleTAXONOMYVARCHARconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/TaxonomySEQUENCE_NUMBERVARCHARconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/SequenceNumberMCRPE_CODEVARCHARconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/MCRPECodeMCRPE_START_DATEDATEconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/MCRPEStartDateMCRPE_END_DATEDATEconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/MCRPEEndDateMCRPE_IS_OPTEDBOOLEANconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/MCRPEIsOptedEXPIRATION_DATEDATEconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/ExpirationDateORDERVARCHAROrderconfiguration/entityTypes/HCO/attributes/Identifiers/attributes/OrderREASONVARCHARReasonconfiguration/entityTypes/HCO/attributes/Identifiers/attributes/ReasonSTART_DATEDATEIdentifier Start Dateconfiguration/entityTypes/HCO/attributes/Identifiers/attributes/StartDateEND_DATEDATEIdentifier End Dateconfiguration/entityTypes/HCO/attributes/Identifiers/attributes/EndDateDATA_QUALITYReltio URI: configuration/entityTypes/HCP/attributes/DataQuality, configuration/entityTypes/HCO/attributes/DataQuality, configuration/entityTypes/MCO/attributes/DataQualityMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameDATA_QUALITY_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeDQ_DESCRIPTIONVARCHARDQ Descriptionconfiguration/entityTypes/HCP/attributes/DataQuality/attributes/DQDescription, configuration/entityTypes/HCO/attributes/DataQuality/attributes/DQDescription, configuration/entityTypes/MCO/attributes/DataQuality/attributes/DQDescriptionDQDescriptionLICENSEReltio URI: 
configuration/entityTypes/HCP/attributes/License, configuration/entityTypes/HCO/attributes/LicenseMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameLICENSE_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeCATEGORYVARCHARCategory License belongs to, e.g., Internationalconfiguration/entityTypes/HCP/attributes/License/attributes/CategoryPROFESSION_CODEVARCHARProfession Informationconfiguration/entityTypes/HCP/attributes/License/attributes/ProfessionCodeHCPProfessionNUMBERVARCHARState License Number. A unique license number is listed for each license the physician holds. There is no standard format syntax. Format examples: 18986, 4301079019, BX1464089. There is also no limit to the number of licenses a physician can hold in a state. Example: A physician can have an inactive resident license plus unlimited active licenses. Residents can have as many as four licenses since some states issue licenses every yearconfiguration/entityTypes/HCP/attributes/License/attributes/Number, configuration/entityTypes/HCO/attributes/License/attributes/NumberREG_AUTH_IDVARCHARRegAuthIDconfiguration/entityTypes/HCP/attributes/License/attributes/RegAuthIDSTATE_BOARDVARCHARState Boardconfiguration/entityTypes/HCP/attributes/License/attributes/StateBoardSTATE_BOARD_NAMEVARCHARState Board Nameconfiguration/entityTypes/HCP/attributes/License/attributes/StateBoardNameSTATEVARCHARState License State. Two character field. USPS standard abbreviations.configuration/entityTypes/HCP/attributes/License/attributes/State, configuration/entityTypes/HCO/attributes/License/attributes/StateTYPEVARCHARState License Type. U = Unlimited: there is no restriction on the physician to practice medicine; L = Limited: implies restrictions of some sort. 
For example, the physician may practice only in a given county, admit patients only to particular hospitals, or practice under the supervision of a physician with a license in state or private hospitals or other settings; T = Temporary: issued to a physician temporarily practicing in an underserved area outside his/her state of licensure. Also granted between board meetings when new licenses are issued. Time span for a temporary license varies from state to state. Temporary licenses typically expire 6-9 months from the date they are issued; R = Resident: license granted to a physician in graduate medical education (e.g., residency training).configuration/entityTypes/HCP/attributes/License/attributes/TypeST_LIC_TYPESTATUSVARCHARState License Status. A = Active. Physician is licensed to practice within the state; I = Inactive. If the physician has not reregistered a state license OR if the license has been suspended or revoked by the state board; X = Unknown. If the state has not provided current information. Note: Some state boards issue inactive licenses to physicians who want to maintain licensure in the state although they are currently practicing in another state.configuration/entityTypes/HCP/attributes/License/attributes/StatusHCPLicenseStatusSTATUS_DETAILVARCHARDeactivation Reason Codeconfiguration/entityTypes/HCP/attributes/License/attributes/StatusDetailHCPLicenseStatusDetailTRUSTVARCHARTrust flagconfiguration/entityTypes/HCP/attributes/License/attributes/TrustDEACTIVATION_REASON_CODEVARCHARDeactivation Reason Codeconfiguration/entityTypes/HCP/attributes/License/attributes/DeactivationReasonCodeHCPLicenseDeactivationReasonCodeEXPIRATION_DATEDATELicense Expiration Dateconfiguration/entityTypes/HCP/attributes/License/attributes/ExpirationDateISSUE_DATEDATEState License Issue Dateconfiguration/entityTypes/HCP/attributes/License/attributes/IssueDateSTATE_LICENSE_PRIVILEGEVARCHARState License 
Privilegeconfiguration/entityTypes/HCP/attributes/License/attributes/StateLicensePrivilegeSTATE_LICENSE_PRIVILEGE_NAMEVARCHARState License Privilege Nameconfiguration/entityTypes/HCP/attributes/License/attributes/StateLicensePrivilegeNameSTATE_LICENSE_STATUS_DATEDATEState License Status Dateconfiguration/entityTypes/HCP/attributes/License/attributes/StateLicenseStatusDateRANKVARCHARRank of Licenseconfiguration/entityTypes/HCP/attributes/License/attributes/RankCERTIFICATION_CODEVARCHARCertification Codeconfiguration/entityTypes/HCP/attributes/License/attributes/CertificationCodeHCPLicenseCertificationLICENSE_SOURCESourceReltio URI: configuration/entityTypes/HCP/attributes/License/attributes/SourceMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameLICENSE_URIVARCHARGenerated KeySOURCE_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeSOURCE_NAMEVARCHARSourceNameconfiguration/entityTypes/HCP/attributes/License/attributes/Source/attributes/SourceNameSOURCE_RANKVARCHARSourceRankconfiguration/entityTypes/HCP/attributes/License/attributes/Source/attributes/SourceRankLICENSE_REGULATORYLicense RegulatoryReltio URI: configuration/entityTypes/HCP/attributes/License/attributes/RegulatoryMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameLICENSE_URIVARCHARGenerated KeyREGULATORY_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeREQ_SAMPL_NON_CTRLVARCHARReq Sampl Non Ctrlconfiguration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/ReqSamplNonCtrlREQ_SAMPL_CTRLVARCHARReq Sampl Ctrlconfiguration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/ReqSamplCtrlRECV_SAMPL_NON_CTRLVARCHARRecv Sampl Non Ctrlconfiguration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/RecvSamplNonCtrlRECV_SAMPL_CTRLVARCHARRecv Sampl 
Ctrlconfiguration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/RecvSamplCtrlDISTR_SAMPL_NON_CTRLVARCHARDistr Sampl Non Ctrlconfiguration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/DistrSamplNonCtrlDISTR_SAMPL_CTRLVARCHARDistr Sampl Ctrlconfiguration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/DistrSamplCtrlSAMP_DRUG_SCHED_I_FLAGVARCHARSamp Drug Sched I Flagconfiguration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/SampDrugSchedIFlagSAMP_DRUG_SCHED_II_FLAGVARCHARSamp Drug Sched II Flagconfiguration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/SampDrugSchedIIFlagSAMP_DRUG_SCHED_III_FLAGVARCHARSamp Drug Sched III Flagconfiguration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/SampDrugSchedIIIFlagSAMP_DRUG_SCHED_IV_FLAGVARCHARSamp Drug Sched IV Flagconfiguration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/SampDrugSchedIVFlagSAMP_DRUG_SCHED_V_FLAGVARCHARSamp Drug Sched V Flagconfiguration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/SampDrugSchedVFlagSAMP_DRUG_SCHED_VI_FLAGVARCHARSamp Drug Sched VI Flagconfiguration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/SampDrugSchedVIFlagPRESCR_NON_CTRL_FLAGVARCHARPrescr Non Ctrl Flagconfiguration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/PrescrNonCtrlFlagPRESCR_APP_REQ_NON_CTRL_FLAGVARCHARPrescr App Req Non Ctrl Flagconfiguration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/PrescrAppReqNonCtrlFlagPRESCR_CTRL_FLAGVARCHARPrescr Ctrl Flagconfiguration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/PrescrCtrlFlagPRESCR_APP_REQ_CTRL_FLAGVARCHARPrescr App Req Ctrl Flagconfiguration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/PrescrAppReqCtrlFlagPRESCR_DRUG_SCHED_I_FLAGVARCHARPrescr Drug Sched I 
Flagconfiguration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/PrescrDrugSchedIFlagPRESCR_DRUG_SCHED_II_FLAGVARCHARPrescr Drug Sched II Flagconfiguration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/PrescrDrugSchedIIFlagPRESCR_DRUG_SCHED_III_FLAGVARCHARPrescr Drug Sched III Flagconfiguration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/PrescrDrugSchedIIIFlagPRESCR_DRUG_SCHED_IV_FLAGVARCHARPrescr Drug Sched IV Flagconfiguration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/PrescrDrugSchedIVFlagPRESCR_DRUG_SCHED_V_FLAGVARCHARPrescr Drug Sched V Flagconfiguration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/PrescrDrugSchedVFlagPRESCR_DRUG_SCHED_VI_FLAGVARCHARPrescr Drug Sched VI Flagconfiguration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/PrescrDrugSchedVIFlagSUPERVISORY_REL_CD_NON_CTRLVARCHARSupervisory Rel Cd Non Ctrlconfiguration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/SupervisoryRelCdNonCtrlSUPERVISORY_REL_CD_CTRLVARCHARSupervisory Rel Cd Ctrlconfiguration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/SupervisoryRelCdCtrlCOLLABORATIVE_NONCTRLVARCHARCollaborative Non ctrlconfiguration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/CollaborativeNonctrlCOLLABORATIVE_CTRLVARCHARCollaborative ctrlconfiguration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/CollaborativeCtrlINCLUSIONARYVARCHARInclusionaryconfiguration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/InclusionaryEXCLUSIONARYVARCHARExclusionaryconfiguration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/ExclusionaryDELEGATION_NON_CTRLVARCHARDelegation Non Ctrlconfiguration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/DelegationNonCtrlDELEGATION_CTRLVARCHARDelegation 
Ctrlconfiguration/entityTypes/HCP/attributes/License/attributes/Regulatory/attributes/DelegationCtrlCSRReltio URI: configuration/entityTypes/HCP/attributes/CSRMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameCSR_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypePROFESSION_CODEVARCHARProfession Informationconfiguration/entityTypes/HCP/attributes/CSR/attributes/ProfessionCodeHCPProfessionAUTHORIZATION_NUMBERVARCHARAuthorization number of CSRconfiguration/entityTypes/HCP/attributes/CSR/attributes/AuthorizationNumberREG_AUTH_IDVARCHARRegAuthIDconfiguration/entityTypes/HCP/attributes/CSR/attributes/RegAuthIDSTATE_BOARDVARCHARState Boardconfiguration/entityTypes/HCP/attributes/CSR/attributes/StateBoardSTATE_BOARD_NAMEVARCHARState Board Nameconfiguration/entityTypes/HCP/attributes/CSR/attributes/StateBoardNameSTATEVARCHARState of CSR.configuration/entityTypes/HCP/attributes/CSR/attributes/StateCSR_LICENSE_TYPEVARCHARCSR License Typeconfiguration/entityTypes/HCP/attributes/CSR/attributes/CSRLicenseTypeCSR_LICENSE_TYPE_NAMEVARCHARCSR License Type Nameconfiguration/entityTypes/HCP/attributes/CSR/attributes/CSRLicenseTypeNameCSR_LICENSE_PRIVILEGEVARCHARCSR License Privilegeconfiguration/entityTypes/HCP/attributes/CSR/attributes/CSRLicensePrivilegeCSR_LICENSE_PRIVILEGE_NAMEVARCHARCSR License Privilege Nameconfiguration/entityTypes/HCP/attributes/CSR/attributes/CSRLicensePrivilegeNameCSR_LICENSE_EFFECTIVE_DATEDATECSR License Effective Dateconfiguration/entityTypes/HCP/attributes/CSR/attributes/CSRLicenseEffectiveDateCSR_LICENSE_EXPIRATION_DATEDATECSR License Expiration Dateconfiguration/entityTypes/HCP/attributes/CSR/attributes/CSRLicenseExpirationDateCSR_LICENSE_STATUSVARCHARCSR License 
Statusconfiguration/entityTypes/HCP/attributes/CSR/attributes/CSRLicenseStatusHCPLicenseStatusSTATUS_DETAILVARCHARCSRLicenseDeactivationReasonconfiguration/entityTypes/HCP/attributes/CSR/attributes/StatusDetailHCPLicenseStatusDetailCSR_LICENSE_DEACTIVATION_REASONVARCHARCSR License Deactivation Reasonconfiguration/entityTypes/HCP/attributes/CSR/attributes/CSRLicenseDeactivationReasonHCPCSRLicenseDeactivationReasonCSR_LICENSE_CERTIFICATIONVARCHARCSR License Certificationconfiguration/entityTypes/HCP/attributes/CSR/attributes/CSRLicenseCertificationHCPLicenseCertificationCSR_LICENSE_TYPE_PRIVILEGE_RANKVARCHARCSR License Type Privilege Rankconfiguration/entityTypes/HCP/attributes/CSR/attributes/CSRLicenseTypePrivilegeRankCSR_REGULATORYCSR RegulatoryReltio URI: configuration/entityTypes/HCP/attributes/CSR/attributes/RegulatoryMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameCSR_URIVARCHARGenerated KeyREGULATORY_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeREQ_SAMPL_NON_CTRLVARCHARReq Sampl Non Ctrlconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/ReqSamplNonCtrlREQ_SAMPL_CTRLVARCHARReq Sampl Ctrlconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/ReqSamplCtrlRECV_SAMPL_NON_CTRLVARCHARRecv Sampl Non Ctrlconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/RecvSamplNonCtrlRECV_SAMPL_CTRLVARCHARRecv Sampl Ctrlconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/RecvSamplCtrlDISTR_SAMPL_NON_CTRLVARCHARDistr Sampl Non Ctrlconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/DistrSamplNonCtrlDISTR_SAMPL_CTRLVARCHARDistr Sampl Ctrlconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/DistrSamplCtrlSAMP_DRUG_SCHED_I_FLAGVARCHARSamp Drug Sched I 
Flagconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/SampDrugSchedIFlagSAMP_DRUG_SCHED_II_FLAGVARCHARSamp Drug Sched II Flagconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/SampDrugSchedIIFlagSAMP_DRUG_SCHED_III_FLAGVARCHARSamp Drug Sched III Flagconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/SampDrugSchedIIIFlagSAMP_DRUG_SCHED_IV_FLAGVARCHARSamp Drug Sched IV Flagconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/SampDrugSchedIVFlagSAMP_DRUG_SCHED_V_FLAGVARCHARSamp Drug Sched V Flagconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/SampDrugSchedVFlagSAMP_DRUG_SCHED_VI_FLAGVARCHARSamp Drug Sched VI Flagconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/SampDrugSchedVIFlagPRESCR_NON_CTRL_FLAGVARCHARPrescr Non Ctrl Flagconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/PrescrNonCtrlFlagPRESCR_APP_REQ_NON_CTRL_FLAGVARCHARPrescr App Req Non Ctrl Flagconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/PrescrAppReqNonCtrlFlagPRESCR_CTRL_FLAGVARCHARPrescr Ctrl Flagconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/PrescrCtrlFlagPRESCR_APP_REQ_CTRL_FLAGVARCHARPrescr App Req Ctrl Flagconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/PrescrAppReqCtrlFlagPRESCR_DRUG_SCHED_I_FLAGVARCHARPrescr Drug Sched I Flagconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/PrescrDrugSchedIFlagPRESCR_DRUG_SCHED_II_FLAGVARCHARPrescr Drug Sched II Flagconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/PrescrDrugSchedIIFlagPRESCR_DRUG_SCHED_III_FLAGVARCHARPrescr Drug Sched III Flagconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/PrescrDrugSchedIIIFlagPRESCR_DRUG_SCHED_IV_FLAGVARCHARPrescr Drug Sched IV 
Flagconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/PrescrDrugSchedIVFlagPRESCR_DRUG_SCHED_V_FLAGVARCHARPrescr Drug Sched V Flagconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/PrescrDrugSchedVFlagPRESCR_DRUG_SCHED_VI_FLAGVARCHARPrescr Drug Sched VI Flagconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/PrescrDrugSchedVIFlagSUPERVISORY_REL_CD_NON_CTRLVARCHARSupervisory Rel Cd Non Ctrlconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/SupervisoryRelCdNonCtrlSUPERVISORY_REL_CD_CTRLVARCHARSupervisory Rel Cd Ctrlconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/SupervisoryRelCdCtrlCOLLABORATIVE_NONCTRLVARCHARCollaborative Non ctrlconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/CollaborativeNonctrlCOLLABORATIVE_CTRLVARCHARCollaborative ctrlconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/CollaborativeCtrlINCLUSIONARYVARCHARInclusionaryconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/InclusionaryEXCLUSIONARYVARCHARExclusionaryconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/ExclusionaryDELEGATION_NON_CTRLVARCHARDelegation Non Ctrlconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/DelegationNonCtrlDELEGATION_CTRLVARCHARDelegation Ctrlconfiguration/entityTypes/HCP/attributes/CSR/attributes/Regulatory/attributes/DelegationCtrlPRIVACY_PREFERENCESReltio URI: configuration/entityTypes/HCP/attributes/PrivacyPreferencesMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NamePRIVACY_PREFERENCES_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeAMA_NO_CONTACTBOOLEANCan be Contacted through AMA or 
notconfiguration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/AMANoContactFTC_NO_CONTACTBOOLEANCan be Contacted through FTC or notconfiguration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/FTCNoContactPDRPBOOLEANPhysician Data Restriction Program enrolled or notconfiguration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/PDRPPDRP_DATEDATEPhysician Data Restriction Program enrollment dateconfiguration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/PDRPDateOPT_OUT_START_DATEDATEOpt Out Start Dateconfiguration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/OptOutStartDateALLOWED_TO_CONTACTBOOLEANIndicator whether allowed to contactconfiguration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/AllowedToContactPHONE_OPT_OUTBOOLEANOpted Out for being contacted on Phone or notconfiguration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/PhoneOptOutEMAIL_OPT_OUTBOOLEANOpted Out for being contacted through Email or notconfiguration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/EmailOptOutFAX_OPT_OUTBOOLEANOpted Out for being contacted through Fax or notconfiguration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/FaxOptOutMAIL_OPT_OUTBOOLEANOpted Out for being contacted through Mail or notconfiguration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/MailOptOutNO_CONTACT_REASONVARCHARReason for no contactconfiguration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/NoContactReasonNO_CONTACT_EFFECTIVE_DATEDATEEffective date of no contactconfiguration/entityTypes/HCP/attributes/PrivacyPreferences/attributes/NoContactEffectiveDateCERTIFICATESReltio URI: configuration/entityTypes/HCP/attributes/CertificatesMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameCERTIFICATES_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeCERTIFICATE_IDVARCHARCertificate Id of Certificate 
received by HCPconfiguration/entityTypes/HCP/attributes/Certificates/attributes/CertificateIdSPEAKERReltio URI: configuration/entityTypes/HCP/attributes/SpeakerMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameSPEAKER_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeLEVELVARCHARLevelconfiguration/entityTypes/HCP/attributes/Speaker/attributes/LevelHCPTierLevelTIER_STATUSVARCHARTier Statusconfiguration/entityTypes/HCP/attributes/Speaker/attributes/TierStatusHCPTierStatusTIER_APPROVAL_DATEDATETier Approval Dateconfiguration/entityTypes/HCP/attributes/Speaker/attributes/TierApprovalDateTIER_UPDATED_DATEDATETier Updated Dateconfiguration/entityTypes/HCP/attributes/Speaker/attributes/TierUpdatedDateTIER_APPROVERVARCHARTier Approverconfiguration/entityTypes/HCP/attributes/Speaker/attributes/TierApproverEFFECTIVE_DATEDATESpeaker Effective Dateconfiguration/entityTypes/HCP/attributes/Speaker/attributes/EffectiveDateDEACTIVATE_REASONVARCHARSpeaker Deactivate Reasonconfiguration/entityTypes/HCP/attributes/Speaker/attributes/DeactivateReasonIS_SPEAKERBOOLEANconfiguration/entityTypes/HCP/attributes/Speaker/attributes/IsSpeakerSPEAKER_TIER_RATIONALETier RationaleReltio URI: configuration/entityTypes/HCP/attributes/Speaker/attributes/TierRationaleMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameSPEAKER_URIVARCHARGenerated KeyTIER_RATIONALE_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeTIER_RATIONALEVARCHARTier Rationaleconfiguration/entityTypes/HCP/attributes/Speaker/attributes/TierRationale/attributes/TierRationaleHCPTierRationalRAWDEAReltio URI: configuration/entityTypes/HCP/attributes/RAWDEAMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameRAWDEA_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry 
CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeDEA_NUMBERVARCHARRAW DEA Numberconfiguration/entityTypes/HCP/attributes/RAWDEA/attributes/DEANumberDEA_BUSINESS_ACTIVITYVARCHARDEA Business Activityconfiguration/entityTypes/HCP/attributes/RAWDEA/attributes/DEABusinessActivityEFFECTIVE_DATEDATERAW DEA Effective Dateconfiguration/entityTypes/HCP/attributes/RAWDEA/attributes/EffectiveDateEXPIRATION_DATEDATERAW DEA Expiration Dateconfiguration/entityTypes/HCP/attributes/RAWDEA/attributes/ExpirationDateNAMEVARCHARRAW DEA Nameconfiguration/entityTypes/HCP/attributes/RAWDEA/attributes/NameADDITIONAL_COMPANY_INFOVARCHARAdditional Company Infoconfiguration/entityTypes/HCP/attributes/RAWDEA/attributes/AdditionalCompanyInfoADDRESS1VARCHARRAW DEA Address 1configuration/entityTypes/HCP/attributes/RAWDEA/attributes/Address1ADDRESS2VARCHARRAW DEA Address 2configuration/entityTypes/HCP/attributes/RAWDEA/attributes/Address2CITYVARCHARRAW DEA Cityconfiguration/entityTypes/HCP/attributes/RAWDEA/attributes/CitySTATEVARCHARRAW DEA Stateconfiguration/entityTypes/HCP/attributes/RAWDEA/attributes/StateZIPVARCHARRAW DEA Zipconfiguration/entityTypes/HCP/attributes/RAWDEA/attributes/ZipBUSINESS_ACTIVITY_SUB_CDVARCHARBusiness Activity Sub Cdconfiguration/entityTypes/HCP/attributes/RAWDEA/attributes/BusinessActivitySubCdPAYMT_INDVARCHARPaymt Indicatorconfiguration/entityTypes/HCP/attributes/RAWDEA/attributes/PaymtIndHCPRAWDEAPaymtIndRAW_DEA_SCHD_CLAS_CDVARCHARRaw Dea Schd Clas Cdconfiguration/entityTypes/HCP/attributes/RAWDEA/attributes/RawDeaSchdClasCdSTATUSVARCHARRaw Dea Statusconfiguration/entityTypes/HCP/attributes/RAWDEA/attributes/StatusPHONEReltio URI: configuration/entityTypes/HCP/attributes/Phone, configuration/entityTypes/HCO/attributes/Phone, configuration/entityTypes/MCO/attributes/PhoneMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NamePHONE_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive 
FlagENTITY_TYPEVARCHARReltio Entity TypeTYPEVARCHARconfiguration/entityTypes/HCP/attributes/Phone/attributes/Type, configuration/entityTypes/HCO/attributes/Phone/attributes/Type, configuration/entityTypes/MCO/attributes/Phone/attributes/TypePhoneTypeNUMBERVARCHARPhone numberconfiguration/entityTypes/HCP/attributes/Phone/attributes/Number, configuration/entityTypes/HCO/attributes/Phone/attributes/Number, configuration/entityTypes/MCO/attributes/Phone/attributes/NumberFORMATTED_NUMBERVARCHARconfiguration/entityTypes/HCP/attributes/Phone/attributes/FormattedNumber, configuration/entityTypes/HCO/attributes/Phone/attributes/FormattedNumber, configuration/entityTypes/MCO/attributes/Phone/attributes/FormattedNumberEXTENSIONVARCHARExtension, if anyconfiguration/entityTypes/HCP/attributes/Phone/attributes/Extension, configuration/entityTypes/HCO/attributes/Phone/attributes/Extension, configuration/entityTypes/MCO/attributes/Phone/attributes/ExtensionRANKVARCHARRank used to assign priority to a Phone numberconfiguration/entityTypes/HCP/attributes/Phone/attributes/Rank, configuration/entityTypes/HCO/attributes/Phone/attributes/Rank, configuration/entityTypes/MCO/attributes/Phone/attributes/RankPHONE_USAGE_TAGVARCHARconfiguration/entityTypes/HCP/attributes/Phone/attributes/PhoneUsageTag, configuration/entityTypes/HCO/attributes/Phone/attributes/PhoneUsageTag, configuration/entityTypes/MCO/attributes/Phone/attributes/PhoneUsageTagPhoneUsageTagUSAGE_TYPEVARCHARUsage Type of a Phone numberconfiguration/entityTypes/HCP/attributes/Phone/attributes/UsageType, configuration/entityTypes/HCO/attributes/Phone/attributes/UsageType, configuration/entityTypes/MCO/attributes/Phone/attributes/UsageTypeAREA_CODEVARCHARconfiguration/entityTypes/HCP/attributes/Phone/attributes/AreaCode, configuration/entityTypes/HCO/attributes/Phone/attributes/AreaCode, 
configuration/entityTypes/MCO/attributes/Phone/attributes/AreaCodeLOCAL_NUMBERVARCHARconfiguration/entityTypes/HCP/attributes/Phone/attributes/LocalNumber, configuration/entityTypes/HCO/attributes/Phone/attributes/LocalNumber, configuration/entityTypes/MCO/attributes/Phone/attributes/LocalNumberVALIDATION_STATUSVARCHARconfiguration/entityTypes/HCP/attributes/Phone/attributes/ValidationStatus, configuration/entityTypes/HCO/attributes/Phone/attributes/ValidationStatus, configuration/entityTypes/MCO/attributes/Phone/attributes/ValidationStatusLINE_TYPEVARCHARconfiguration/entityTypes/HCP/attributes/Phone/attributes/LineType, configuration/entityTypes/HCO/attributes/Phone/attributes/LineType, configuration/entityTypes/MCO/attributes/Phone/attributes/LineTypeFORMAT_MASKVARCHARconfiguration/entityTypes/HCP/attributes/Phone/attributes/FormatMask, configuration/entityTypes/HCO/attributes/Phone/attributes/FormatMask, configuration/entityTypes/MCO/attributes/Phone/attributes/FormatMaskDIGIT_COUNTVARCHARconfiguration/entityTypes/HCP/attributes/Phone/attributes/DigitCount, configuration/entityTypes/HCO/attributes/Phone/attributes/DigitCount, configuration/entityTypes/MCO/attributes/Phone/attributes/DigitCountGEO_AREAVARCHARconfiguration/entityTypes/HCP/attributes/Phone/attributes/GeoArea, configuration/entityTypes/HCO/attributes/Phone/attributes/GeoArea, configuration/entityTypes/MCO/attributes/Phone/attributes/GeoAreaGEO_COUNTRYVARCHARconfiguration/entityTypes/HCP/attributes/Phone/attributes/GeoCountry, configuration/entityTypes/HCO/attributes/Phone/attributes/GeoCountry, configuration/entityTypes/MCO/attributes/Phone/attributes/GeoCountryCOUNTRY_CODEVARCHARTwo digit code for a Countryconfiguration/entityTypes/HCO/attributes/Phone/attributes/CountryCode, configuration/entityTypes/MCO/attributes/Phone/attributes/CountryCodePHONE_SOURCESourceReltio URI: configuration/entityTypes/HCP/attributes/Phone/attributes/Source, 
configuration/entityTypes/HCO/attributes/Phone/attributes/Source, configuration/entityTypes/MCO/attributes/Phone/attributes/SourceMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NamePHONE_URIVARCHARGenerated KeySOURCE_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeSOURCE_NAMEVARCHARSourceNameconfiguration/entityTypes/HCP/attributes/Phone/attributes/Source/attributes/SourceName, configuration/entityTypes/HCO/attributes/Phone/attributes/Source/attributes/SourceName, configuration/entityTypes/MCO/attributes/Phone/attributes/Source/attributes/SourceNameSOURCE_RANKVARCHARSourceRankconfiguration/entityTypes/HCP/attributes/Phone/attributes/Source/attributes/SourceRank, configuration/entityTypes/HCO/attributes/Phone/attributes/Source/attributes/SourceRank, configuration/entityTypes/MCO/attributes/Phone/attributes/Source/attributes/SourceRankSOURCE_ADDRESS_IDVARCHARSourceAddressIDconfiguration/entityTypes/HCP/attributes/Phone/attributes/Source/attributes/SourceAddressID, configuration/entityTypes/HCO/attributes/Phone/attributes/Source/attributes/SourceAddressID, configuration/entityTypes/MCO/attributes/Phone/attributes/Source/attributes/SourceAddressIDHCP_ADDRESS_ZIPReltio URI: configuration/entityTypes/Location/attributes/ZipMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameADDRESS_URIVARCHARGenerated KeyZIP_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypePOSTAL_CODEVARCHARconfiguration/entityTypes/Location/attributes/Zip/attributes/PostalCodeZIP5VARCHARconfiguration/entityTypes/Location/attributes/Zip/attributes/Zip5ZIP4VARCHARconfiguration/entityTypes/Location/attributes/Zip/attributes/Zip4DEAReltio URI: configuration/entityTypes/HCP/attributes/DEA, configuration/entityTypes/HCO/attributes/DEAMaterialized: noColumnTypeDescriptionReltio Attribute URILOV 
NameDEA_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeNUMBERVARCHARconfiguration/entityTypes/HCP/attributes/DEA/attributes/Number, configuration/entityTypes/HCO/attributes/DEA/attributes/NumberSTATUSVARCHARconfiguration/entityTypes/HCP/attributes/DEA/attributes/Status, configuration/entityTypes/HCO/attributes/DEA/attributes/StatusApp-LSCustomer360DEAStatusEXPIRATION_DATEDATEconfiguration/entityTypes/HCP/attributes/DEA/attributes/ExpirationDate, configuration/entityTypes/HCO/attributes/DEA/attributes/ExpirationDateDRUG_SCHEDULEVARCHARconfiguration/entityTypes/HCP/attributes/DEA/attributes/DrugSchedule, configuration/entityTypes/HCO/attributes/DEA/attributes/DrugScheduleApp-LSCustomer360DEADrugScheduleDRUG_SCHEDULE_DESCRIPTIONVARCHARconfiguration/entityTypes/HCP/attributes/DEA/attributes/DrugScheduleDescription, configuration/entityTypes/HCO/attributes/DEA/attributes/DrugScheduleDescriptionBUSINESS_ACTIVITYVARCHARconfiguration/entityTypes/HCP/attributes/DEA/attributes/BusinessActivity, configuration/entityTypes/HCO/attributes/DEA/attributes/BusinessActivityApp-LSCustomer360DEABusinessActivityBUSINESS_ACTIVITY_PLUS_SUB_CODEVARCHARBusiness Activity SubCodeconfiguration/entityTypes/HCP/attributes/DEA/attributes/BusinessActivityPlusSubCode, configuration/entityTypes/HCO/attributes/DEA/attributes/BusinessActivityPlusSubCodeApp-LSCustomer360DEABusinessActivitySubcodeBUSINESS_ACTIVITY_DESCRIPTIONVARCHARStringconfiguration/entityTypes/HCP/attributes/DEA/attributes/BusinessActivityDescription, configuration/entityTypes/HCO/attributes/DEA/attributes/BusinessActivityDescriptionApp-LSCustomer360DEABusinessActivityDescriptionPAYMENT_INDICATORVARCHARStringconfiguration/entityTypes/HCP/attributes/DEA/attributes/PaymentIndicator, 
configuration/entityTypes/HCO/attributes/DEA/attributes/PaymentIndicatorApp-LSCustomer360DEAPaymentIndicatorTAXONOMYReltio URI: configuration/entityTypes/HCP/attributes/Taxonomy, configuration/entityTypes/HCO/attributes/TaxonomyMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameTAXONOMY_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeTAXONOMYVARCHARTaxonomy related to HCP, e.g., Obstetrics & Gynecologyconfiguration/entityTypes/HCP/attributes/Taxonomy/attributes/Taxonomy, configuration/entityTypes/HCO/attributes/Taxonomy/attributes/TaxonomyApp-LSCustomer360Taxonomy,TAXONOMY_CDTYPEVARCHARType of Taxonomy, e.g., Primaryconfiguration/entityTypes/HCP/attributes/Taxonomy/attributes/Type, configuration/entityTypes/HCO/attributes/Taxonomy/attributes/TypeApp-LSCustomer360TaxonomyType,TAXONOMY_TYPESTATE_CODEVARCHARconfiguration/entityTypes/HCP/attributes/Taxonomy/attributes/StateCodeGROUPVARCHARGroup Taxonomy belongs toconfiguration/entityTypes/HCP/attributes/Taxonomy/attributes/GroupPROVIDER_TYPEVARCHARTaxonomy Provider Typeconfiguration/entityTypes/HCP/attributes/Taxonomy/attributes/ProviderType, configuration/entityTypes/HCO/attributes/Taxonomy/attributes/ProviderTypeCLASSIFICATIONVARCHARClassification of Taxonomyconfiguration/entityTypes/HCP/attributes/Taxonomy/attributes/Classification, configuration/entityTypes/HCO/attributes/Taxonomy/attributes/ClassificationSPECIALIZATIONVARCHARSpecialization of Taxonomyconfiguration/entityTypes/HCP/attributes/Taxonomy/attributes/Specialization, configuration/entityTypes/HCO/attributes/Taxonomy/attributes/SpecializationPRIORITYVARCHARTaxonomy Priorityconfiguration/entityTypes/HCP/attributes/Taxonomy/attributes/Priority, configuration/entityTypes/HCO/attributes/Taxonomy/attributes/PriorityTAXONOMY_PRIORITYSANCTIONReltio URI: configuration/entityTypes/HCP/attributes/SanctionMaterialized: noColumnTypeDescriptionReltio Attribute 
URILOV NameSANCTION_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeSANCTION_IDVARCHARCourt sanction Id for any case.configuration/entityTypes/HCP/attributes/Sanction/attributes/SanctionIdACTION_CODEVARCHARCourt sanction code for a caseconfiguration/entityTypes/HCP/attributes/Sanction/attributes/ActionCodeACTION_DESCRIPTIONVARCHARCourt sanction Action Descriptionconfiguration/entityTypes/HCP/attributes/Sanction/attributes/ActionDescriptionBOARD_CODEVARCHARCourt case board idconfiguration/entityTypes/HCP/attributes/Sanction/attributes/BoardCodeBOARD_DESCVARCHARcourt case board descriptionconfiguration/entityTypes/HCP/attributes/Sanction/attributes/BoardDescACTION_DATEDATECourt sanction Action Dateconfiguration/entityTypes/HCP/attributes/Sanction/attributes/ActionDateSANCTION_PERIOD_START_DATEDATESanction Period Start Dateconfiguration/entityTypes/HCP/attributes/Sanction/attributes/SanctionPeriodStartDateSANCTION_PERIOD_END_DATEDATESanction Period End Dateconfiguration/entityTypes/HCP/attributes/Sanction/attributes/SanctionPeriodEndDateMONTH_DURATIONVARCHARSanction Duration in Monthsconfiguration/entityTypes/HCP/attributes/Sanction/attributes/MonthDurationFINE_AMOUNTVARCHARFine Amount for Sanctionconfiguration/entityTypes/HCP/attributes/Sanction/attributes/FineAmountOFFENSE_CODEVARCHAROffense Code for Sanctionconfiguration/entityTypes/HCP/attributes/Sanction/attributes/OffenseCodeOFFENSE_DESCRIPTIONVARCHAROffense Description for Sanctionconfiguration/entityTypes/HCP/attributes/Sanction/attributes/OffenseDescriptionOFFENSE_DATEDATEOffense Date for Sanctionconfiguration/entityTypes/HCP/attributes/Sanction/attributes/OffenseDateGSA_SANCTIONReltio URI: configuration/entityTypes/HCP/attributes/GSASanctionMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameGSA_SANCTION_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry 
CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeSANCTION_IDVARCHARSanction Id of HCP as per GSA Sanction listconfiguration/entityTypes/HCP/attributes/GSASanction/attributes/SanctionIdFIRST_NAMEVARCHARFirst Name of HCP as per GSA Sanction listconfiguration/entityTypes/HCP/attributes/GSASanction/attributes/FirstNameMIDDLE_NAMEVARCHARMiddle Name of HCP as per GSA Sanction listconfiguration/entityTypes/HCP/attributes/GSASanction/attributes/MiddleNameLAST_NAMEVARCHARLast Name of HCP as per GSA Sanction listconfiguration/entityTypes/HCP/attributes/GSASanction/attributes/LastNameSUFFIX_NAMEVARCHARSuffix Name of HCP as per GSA Sanction listconfiguration/entityTypes/HCP/attributes/GSASanction/attributes/SuffixNameCITYVARCHARCity of HCP as per GSA Sanction listconfiguration/entityTypes/HCP/attributes/GSASanction/attributes/CitySTATEVARCHARState of HCP as per GSA Sanction listconfiguration/entityTypes/HCP/attributes/GSASanction/attributes/StateZIPVARCHARZip of HCP as per GSA Sanction listconfiguration/entityTypes/HCP/attributes/GSASanction/attributes/ZipACTION_DATEVARCHARAction Date for GSA Sanctionconfiguration/entityTypes/HCP/attributes/GSASanction/attributes/ActionDateTERM_DATEVARCHARTerm Date for GSA Sanctionconfiguration/entityTypes/HCP/attributes/GSASanction/attributes/TermDateAGENCYVARCHARAgency that imposed Sanctionconfiguration/entityTypes/HCP/attributes/GSASanction/attributes/AgencyCONFIDENCEVARCHARConfidence as per GSA Sanction listconfiguration/entityTypes/HCP/attributes/GSASanction/attributes/ConfidenceMULTI_CHANNEL_COMMUNICATION_CONSENTReltio URI: configuration/entityTypes/HCP/attributes/MultiChannelCommunicationConsentMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameMULTI_CHANNEL_COMMUNICATION_CONSENT_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeCHANNEL_TYPEVARCHARChannel type for the consent, e.g. 
email, SMS, etc.configuration/entityTypes/HCP/attributes/MultiChannelCommunicationConsent/attributes/ChannelTypeCHANNEL_VALUEVARCHARValue of the channel for consent - john.doe@email.comconfiguration/entityTypes/HCP/attributes/MultiChannelCommunicationConsent/attributes/ChannelValueCHANNEL_CONSENTVARCHARThe consent for the corresponding channel and the id - yes or noconfiguration/entityTypes/HCP/attributes/MultiChannelCommunicationConsent/attributes/ChannelConsentChannelConsentSTART_DATEDATEStart date of the consentconfiguration/entityTypes/HCP/attributes/MultiChannelCommunicationConsent/attributes/StartDateEXPIRATION_DATEDATEExpiration date of the consentconfiguration/entityTypes/HCP/attributes/MultiChannelCommunicationConsent/attributes/ExpirationDateCOMMUNICATION_TYPEVARCHARDifferent communication type that the individual prefers, for e.g. - New Product Launches, Sales/Discounts, Brand-level Newsconfiguration/entityTypes/HCP/attributes/MultiChannelCommunicationConsent/attributes/CommunicationTypeCOMMUNICATION_FREQUENCYVARCHARHow frequently can the individual be communicated to. 
Example - Daily/monthly/weeklyconfiguration/entityTypes/HCP/attributes/MultiChannelCommunicationConsent/attributes/CommunicationFrequencyCHANNEL_PREFERENCE_FLAGBOOLEANWhen checked denotes the preferred channel of communicationconfiguration/entityTypes/HCP/attributes/MultiChannelCommunicationConsent/attributes/ChannelPreferenceFlagEMPLOYMENTReltio URI: configuration/entityTypes/HCP/attributes/EmploymentMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameEMPLOYMENT_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeNAMEVARCHARNameconfiguration/entityTypes/Organization/attributes/NameTITLEVARCHARconfiguration/relationTypes/Employment/attributes/TitleSUMMARYVARCHARconfiguration/relationTypes/Employment/attributes/SummaryIS_CURRENTBOOLEANconfiguration/relationTypes/Employment/attributes/IsCurrentHCOHealth care organizationReltio URI: configuration/entityTypes/HCOMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeTYPE_CODEVARCHARType Codeconfiguration/entityTypes/HCO/attributes/TypeCodeHCOTypeCOMPANY_CUST_IDVARCHARCOMPANY Customer IDconfiguration/entityTypes/HCO/attributes/COMPANYCustIDSUB_TYPE_CODEVARCHARSubType Codeconfiguration/entityTypes/HCO/attributes/SubTypeCodeHCOSubTypeSUB_CATEGORYVARCHARSubCategoryconfiguration/entityTypes/HCO/attributes/SubCategoryHCOSubCategorySTRUCTURE_TYPE_CODEVARCHARStructure Type Codeconfiguration/entityTypes/HCO/attributes/StructureTypeCodeHCOStructureTypeCodeNAMEVARCHARNameconfiguration/entityTypes/HCO/attributes/NameDOING_BUSINESS_AS_NAMEVARCHARconfiguration/entityTypes/HCO/attributes/DoingBusinessAsNameFLEX_RESTRICTED_PARTY_INDVARCHARParty indicator for 
FLEXconfiguration/entityTypes/HCO/attributes/FlexRestrictedPartyIndTRADE_PARTNERVARCHARStringconfiguration/entityTypes/HCO/attributes/TradePartnerSHIP_TO_SR_PARENT_NAMEVARCHARStringconfiguration/entityTypes/HCO/attributes/ShipToSrParentNameSHIP_TO_JR_PARENT_NAMEVARCHARStringconfiguration/entityTypes/HCO/attributes/ShipToJrParentNameSHIP_FROM_JR_PARENT_NAMEVARCHARStringconfiguration/entityTypes/HCO/attributes/ShipFromJrParentNameTEACHING_HOSPITALVARCHARTeaching Hospitalconfiguration/entityTypes/HCO/attributes/TeachingHospitalOWNERSHIP_STATUSVARCHARconfiguration/entityTypes/HCO/attributes/OwnershipStatusHCOOwnershipStatusPROFIT_STATUSVARCHARProfit Statusconfiguration/entityTypes/HCO/attributes/ProfitStatusHCOProfitStatusCMIVARCHARCMIconfiguration/entityTypes/HCO/attributes/CMICOMPANY_HCOS_FLAGVARCHARCOMPANY HCOS Flagconfiguration/entityTypes/HCO/attributes/COMPANYHCOSFlagSOURCE_MATCH_CATEGORYVARCHARSource Match Categoryconfiguration/entityTypes/HCO/attributes/SourceMatchCategoryCOMM_HOSPVARCHARCommHospconfiguration/entityTypes/HCO/attributes/CommHospGEN_FIRSTVARCHARStringconfiguration/entityTypes/HCO/attributes/GenFirstHCOGenFirstSREP_ACCESSVARCHARStringconfiguration/entityTypes/HCO/attributes/SrepAccessHCOSrepAccessOUT_PATIENTS_NUMBERSVARCHARconfiguration/entityTypes/HCO/attributes/OutPatientsNumbersUNIT_OPER_ROOM_NUMBERVARCHARconfiguration/entityTypes/HCO/attributes/UnitOperRoomNumberPRIMARY_GPOVARCHARPrimary GPOconfiguration/entityTypes/HCO/attributes/PrimaryGPOTOTAL_PRESCRIBERSVARCHARTotal Prescribersconfiguration/entityTypes/HCO/attributes/TotalPrescribersNUM_IN_PATIENTSVARCHARTotal InPatientsconfiguration/entityTypes/HCO/attributes/NumInPatientsTOTAL_LIVESVARCHARTotal Livesconfiguration/entityTypes/HCO/attributes/TotalLivesTOTAL_PHARMACISTSVARCHARTotal Pharmacistsconfiguration/entityTypes/HCO/attributes/TotalPharmacistsTOTAL_M_DSVARCHARTotal MDsconfiguration/entityTypes/HCO/attributes/TotalMDsTOTAL_REVENUEVARCHARTotal 
Revenueconfiguration/entityTypes/HCO/attributes/TotalRevenueSTATUSVARCHARconfiguration/entityTypes/HCO/attributes/StatusHCOStatusSTATUS_DETAILVARCHARDeactivation Reasonconfiguration/entityTypes/HCO/attributes/StatusDetailHCOStatusDetailACCOUNT_BLOCK_CODEVARCHARAccount Block Codeconfiguration/entityTypes/HCO/attributes/AccountBlockCodeTOTAL_LICENSE_BEDSVARCHARTotal License Bedsconfiguration/entityTypes/HCO/attributes/TotalLicenseBedsTOTAL_CENSUS_BEDSVARCHARconfiguration/entityTypes/HCO/attributes/TotalCensusBedsTOTAL_STAFFED_BEDSVARCHARconfiguration/entityTypes/HCO/attributes/TotalStaffedBedsTOTAL_SURGERIESVARCHARTotal Surgeriesconfiguration/entityTypes/HCO/attributes/TotalSurgeriesTOTAL_PROCEDURESVARCHARTotal Proceduresconfiguration/entityTypes/HCO/attributes/TotalProceduresNUM_EMPLOYEESVARCHARNumber of Employeesconfiguration/entityTypes/HCO/attributes/NumEmployeesRESIDENT_COUNTVARCHARResident Countconfiguration/entityTypes/HCO/attributes/ResidentCountFORMULARYVARCHARFormularyconfiguration/entityTypes/HCO/attributes/FormularyHCOFormularyE_MEDICAL_RECORDVARCHARe-Medical Recordconfiguration/entityTypes/HCO/attributes/EMedicalRecordE_PRESCRIBEVARCHARe-Prescribeconfiguration/entityTypes/HCO/attributes/EPrescribeHCOEPrescribePAY_PERFORMVARCHARPay Performconfiguration/entityTypes/HCO/attributes/PayPerformHCOPayPerformDEACTIVATION_REASONVARCHARDeactivation Reasonconfiguration/entityTypes/HCO/attributes/DeactivationReasonHCODeactivationReasonINTERNATIONAL_LOCATION_NUMBERVARCHARInternational location number (part 1)configuration/entityTypes/HCO/attributes/InternationalLocationNumberDCR_STATUSVARCHARStatus of HCO profileconfiguration/entityTypes/HCO/attributes/DCRStatusDCRStatusCOUNTRY_HCOVARCHARCountryconfiguration/entityTypes/HCO/attributes/CountryORIGINAL_SOURCE_NAMEVARCHAROriginal Sourceconfiguration/entityTypes/HCO/attributes/OriginalSourceNameSOURCE_UPDATE_DATEDATEconfiguration/entityTypes/HCO/attributes/SourceUpdateDateCLASSOF_TRADE_NReltio URI: 
configuration/entityTypes/HCO/attributes/ClassofTradeNMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameCLASSOF_TRADE_N_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeSOURCE_COTIDVARCHARSource COT IDconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/SourceCOTIDCOTPRIORITYVARCHARPriorityconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/PrioritySPECIALTYVARCHARSpecialty of Class of Tradeconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/SpecialtyCOTSpecialtyCLASSIFICATIONVARCHARClassification of Class of Tradeconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/ClassificationCOTClassificationFACILITY_TYPEVARCHARFacility Type of Class of Tradeconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/FacilityTypeCOTFacilityTypeCOT_ORDERVARCHARCOT Orderconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/COTOrderSTART_DATEDATEStart Dateconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/StartDateSOURCEVARCHARSourceconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/SourcePRIMARYVARCHARPrimaryconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/PrimaryHCO_ADDRESS_ZIPReltio URI: configuration/entityTypes/Location/attributes/ZipMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameADDRESS_URIVARCHARGenerated KeyZIP_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypePOSTAL_CODEVARCHARconfiguration/entityTypes/Location/attributes/Zip/attributes/PostalCodeZIP5VARCHARconfiguration/entityTypes/Location/attributes/Zip/attributes/Zip5ZIP4VARCHARconfiguration/entityTypes/Location/attributes/Zip/attributes/Zip4340BReltio URI: configuration/entityTypes/HCO/attributes/340bMaterialized: noColumnTypeDescriptionReltio Attribute URILOV 
Name340B_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity Type340BIDVARCHAR340B IDconfiguration/entityTypes/HCO/attributes/340b/attributes/340BIDENTITY_SUB_DIVISION_NAMEVARCHAREntity Sub-Division Nameconfiguration/entityTypes/HCO/attributes/340b/attributes/EntitySubDivisionNamePROGRAM_CODEVARCHARProgram Codeconfiguration/entityTypes/HCO/attributes/340b/attributes/ProgramCode340BProgramCodePARTICIPATINGBOOLEANParticipatingconfiguration/entityTypes/HCO/attributes/340b/attributes/ParticipatingAUTHORIZING_OFFICIAL_NAMEVARCHARAuthorizing Official Nameconfiguration/entityTypes/HCO/attributes/340b/attributes/AuthorizingOfficialNameAUTHORIZING_OFFICIAL_TITLEVARCHARAuthorizing Official Titleconfiguration/entityTypes/HCO/attributes/340b/attributes/AuthorizingOfficialTitleAUTHORIZING_OFFICIAL_TELVARCHARAuthorizing Official Telconfiguration/entityTypes/HCO/attributes/340b/attributes/AuthorizingOfficialTelAUTHORIZING_OFFICIAL_TEL_EXTVARCHARAuthorizing Official Tel Extconfiguration/entityTypes/HCO/attributes/340b/attributes/AuthorizingOfficialTelExtCONTACT_NAMEVARCHARContact Nameconfiguration/entityTypes/HCO/attributes/340b/attributes/ContactNameCONTACT_TITLEVARCHARContact Titleconfiguration/entityTypes/HCO/attributes/340b/attributes/ContactTitleCONTACT_TELEPHONEVARCHARContact Telephoneconfiguration/entityTypes/HCO/attributes/340b/attributes/ContactTelephoneCONTACT_TELEPHONE_EXTVARCHARContact Telephone Extconfiguration/entityTypes/HCO/attributes/340b/attributes/ContactTelephoneExtSIGNED_BY_NAMEVARCHARSigned By Nameconfiguration/entityTypes/HCO/attributes/340b/attributes/SignedByNameSIGNED_BY_TITLEVARCHARSigned By Titleconfiguration/entityTypes/HCO/attributes/340b/attributes/SignedByTitleSIGNED_BY_TELEPHONEVARCHARSigned By Telephoneconfiguration/entityTypes/HCO/attributes/340b/attributes/SignedByTelephoneSIGNED_BY_TELEPHONE_EXTVARCHARSigned By Telephone 
Extconfiguration/entityTypes/HCO/attributes/340b/attributes/SignedByTelephoneExtSIGNED_BY_DATEDATESigned By Dateconfiguration/entityTypes/HCO/attributes/340b/attributes/SignedByDateCERTIFIED_DECERTIFIED_DATEDATECertified/Decertified Dateconfiguration/entityTypes/HCO/attributes/340b/attributes/CertifiedDecertifiedDateRURALVARCHARRuralconfiguration/entityTypes/HCO/attributes/340b/attributes/RuralENTRY_COMMENTSVARCHAREntry Commentsconfiguration/entityTypes/HCO/attributes/340b/attributes/EntryCommentsNATURE_OF_SUPPORTVARCHARNature Of Supportconfiguration/entityTypes/HCO/attributes/340b/attributes/NatureOfSupportEDIT_DATEVARCHAREdit Dateconfiguration/entityTypes/HCO/attributes/340b/attributes/EditDate340B_PARTICIPATION_DATESReltio URI: configuration/entityTypes/HCO/attributes/340b/attributes/ParticipationDatesMaterialized: noColumnTypeDescriptionReltio Attribute URILOV Name340B_URIVARCHARGenerated KeyPARTICIPATION_DATES_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypePARTICIPATING_START_DATEDATEParticipating Start Dateconfiguration/entityTypes/HCO/attributes/340b/attributes/ParticipationDates/attributes/ParticipatingStartDateTERMINATION_DATEDATETermination Dateconfiguration/entityTypes/HCO/attributes/340b/attributes/ParticipationDates/attributes/TerminationDateTERMINATION_CODEVARCHARTermination Codeconfiguration/entityTypes/HCO/attributes/340b/attributes/ParticipationDates/attributes/TerminationCode340BTerminationCodeOTHER_NAMESReltio URI: configuration/entityTypes/HCO/attributes/OtherNamesMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameOTHER_NAMES_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeTYPEVARCHARTypeconfiguration/entityTypes/HCO/attributes/OtherNames/attributes/TypeNAMEVARCHARNameconfiguration/entityTypes/HCO/attributes/OtherNames/attributes/NameACOReltio 
URI: configuration/entityTypes/HCO/attributes/ACOMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameACO_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeTYPEVARCHARTypeconfiguration/entityTypes/HCO/attributes/ACO/attributes/TypeHCOACOTypeACO_TYPE_CATEGORYVARCHARType Categoryconfiguration/entityTypes/HCO/attributes/ACO/attributes/ACOTypeCategoryHCOACOTypeCategoryACO_TYPE_GROUPVARCHARType Group of ACOconfiguration/entityTypes/HCO/attributes/ACO/attributes/ACOTypeGroupHCOACOTypeGroupACO_ACODETAILReltio URI: configuration/entityTypes/HCO/attributes/ACO/attributes/ACODetailMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameACO_URIVARCHARGenerated KeyACO_DETAIL_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeACO_DETAIL_CODEVARCHARDetail Code for ACOconfiguration/entityTypes/HCO/attributes/ACO/attributes/ACODetail/attributes/ACODetailCodeHCOACODetailACO_DETAIL_VALUEVARCHARDetail Value for ACOconfiguration/entityTypes/HCO/attributes/ACO/attributes/ACODetail/attributes/ACODetailValueACO_DETAIL_GROUP_CODEVARCHARDetail Value for ACOconfiguration/entityTypes/HCO/attributes/ACO/attributes/ACODetail/attributes/ACODetailGroupCodeHCOACODetailGroupWEBSITEReltio URI: configuration/entityTypes/HCO/attributes/WebsiteMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameWEBSITE_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeWEBSITE_URLVARCHARUrl of the websiteconfiguration/entityTypes/HCO/attributes/Website/attributes/WebsiteURLWEBSITE_SOURCESourceReltio URI: configuration/entityTypes/HCO/attributes/Website/attributes/SourceMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameWEBSITE_URIVARCHARGenerated KeySOURCE_URIVARCHARGenerated 
KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeSOURCE_NAMEVARCHARSourceNameconfiguration/entityTypes/HCO/attributes/Website/attributes/Source/attributes/SourceNameSOURCE_RANKVARCHARSourceRankconfiguration/entityTypes/HCO/attributes/Website/attributes/Source/attributes/SourceRankSALES_ORGANIZATIONSales OrganizationReltio URI: configuration/entityTypes/HCO/attributes/SalesOrganizationMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameSALES_ORGANIZATION_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeSALES_ORGANIZATION_CODEVARCHARSales Organization Codeconfiguration/entityTypes/HCO/attributes/SalesOrganization/attributes/SalesOrganizationCodeCUSTOMER_ORDER_BLOCKVARCHARCustomer Order Blockconfiguration/entityTypes/HCO/attributes/SalesOrganization/attributes/CustomerOrderBlockCUSTOMER_GROUPVARCHARCustomer Groupconfiguration/entityTypes/HCO/attributes/SalesOrganization/attributes/CustomerGroupHCO_BUSINESS_UNIT_TAGReltio URI: configuration/entityTypes/HCO/attributes/BusinessUnitTAGMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameBUSINESSUNITTAG_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeBUSINESS_UNITVARCHARBusiness Unitconfiguration/entityTypes/HCO/attributes/BusinessUnitTAG/attributes/BusinessUnitSEGMENTVARCHARSegmentconfiguration/entityTypes/HCO/attributes/BusinessUnitTAG/attributes/SegmentCONTRACT_TYPEVARCHARContract Typeconfiguration/entityTypes/HCO/attributes/BusinessUnitTAG/attributes/ContractTypeGLNReltio URI: configuration/entityTypes/HCO/attributes/GLNMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameGLN_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity 
TypeTYPEVARCHARGLN Typeconfiguration/entityTypes/HCO/attributes/GLN/attributes/TypeIDVARCHARGLN IDconfiguration/entityTypes/HCO/attributes/GLN/attributes/IDSTATUSVARCHARGLN Statusconfiguration/entityTypes/HCO/attributes/GLN/attributes/StatusHCOGLNStatusSTATUS_DETAILVARCHARGLN Status Detailconfiguration/entityTypes/HCO/attributes/GLN/attributes/StatusDetailHCOGLNStatusDetailHCO_REFER_BACKReltio URI: configuration/entityTypes/HCO/attributes/ReferBackMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameREFERBACK_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeREFER_BACK_IDVARCHARRefer Back IDconfiguration/entityTypes/HCO/attributes/ReferBack/attributes/ReferBackIDREFER_BACK_HCOSIDVARCHARRefer Back HCOS IDconfiguration/entityTypes/HCO/attributes/ReferBack/attributes/ReferBackHCOSIDDEACTIVATION_REASONVARCHARDeactivation Reasonconfiguration/entityTypes/HCO/attributes/ReferBack/attributes/DeactivationReasonBEDReltio URI: configuration/entityTypes/HCO/attributes/BedMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameBED_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeTYPEVARCHARTypeconfiguration/entityTypes/HCO/attributes/Bed/attributes/TypeHCOBedTypeLICENSE_BEDSVARCHARLicense Bedsconfiguration/entityTypes/HCO/attributes/Bed/attributes/LicenseBedsCENSUS_BEDSVARCHARCensus Bedsconfiguration/entityTypes/HCO/attributes/Bed/attributes/CensusBedsSTAFFED_BEDSVARCHARStaffed Bedsconfiguration/entityTypes/HCO/attributes/Bed/attributes/StaffedBedsGSA_EXCLUSIONReltio URI: configuration/entityTypes/HCO/attributes/GSAExclusionMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameGSA_EXCLUSION_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity 
TypeSANCTION_IDVARCHARconfiguration/entityTypes/HCO/attributes/GSAExclusion/attributes/SanctionIdORGANIZATION_NAMEVARCHARconfiguration/entityTypes/HCO/attributes/GSAExclusion/attributes/OrganizationNameADDRESS_LINE1VARCHARconfiguration/entityTypes/HCO/attributes/GSAExclusion/attributes/AddressLine1ADDRESS_LINE2VARCHARconfiguration/entityTypes/HCO/attributes/GSAExclusion/attributes/AddressLine2CITYVARCHARconfiguration/entityTypes/HCO/attributes/GSAExclusion/attributes/CitySTATEVARCHARconfiguration/entityTypes/HCO/attributes/GSAExclusion/attributes/StateZIPVARCHARconfiguration/entityTypes/HCO/attributes/GSAExclusion/attributes/ZipACTION_DATEVARCHARconfiguration/entityTypes/HCO/attributes/GSAExclusion/attributes/ActionDateTERM_DATEVARCHARconfiguration/entityTypes/HCO/attributes/GSAExclusion/attributes/TermDateAGENCYVARCHARconfiguration/entityTypes/HCO/attributes/GSAExclusion/attributes/AgencyCONFIDENCEVARCHARconfiguration/entityTypes/HCO/attributes/GSAExclusion/attributes/ConfidenceOIG_EXCLUSIONReltio URI: configuration/entityTypes/HCO/attributes/OIGExclusionMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameOIG_EXCLUSION_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeSANCTION_IDVARCHARconfiguration/entityTypes/HCO/attributes/OIGExclusion/attributes/SanctionIdACTION_CODEVARCHARconfiguration/entityTypes/HCO/attributes/OIGExclusion/attributes/ActionCodeACTION_DESCRIPTIONVARCHARconfiguration/entityTypes/HCO/attributes/OIGExclusion/attributes/ActionDescriptionBOARD_CODEVARCHARCourt case board idconfiguration/entityTypes/HCO/attributes/OIGExclusion/attributes/BoardCodeBOARD_DESCVARCHARcourt case board 
descriptionconfiguration/entityTypes/HCO/attributes/OIGExclusion/attributes/BoardDescACTION_DATEDATEconfiguration/entityTypes/HCO/attributes/OIGExclusion/attributes/ActionDateOFFENSE_CODEVARCHARconfiguration/entityTypes/HCO/attributes/OIGExclusion/attributes/OffenseCodeOFFENSE_DESCRIPTIONVARCHARconfiguration/entityTypes/HCO/attributes/OIGExclusion/attributes/OffenseDescriptionBUSINESS_DETAILReltio URI: configuration/entityTypes/HCO/attributes/BusinessDetailMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameBUSINESS_DETAIL_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeDETAILVARCHARDetailconfiguration/entityTypes/HCO/attributes/BusinessDetail/attributes/DetailHCOBusinessDetailGROUPVARCHARGroupconfiguration/entityTypes/HCO/attributes/BusinessDetail/attributes/GroupHCOBusinessDetailGroupDETAIL_VALUEVARCHARDetail Valueconfiguration/entityTypes/HCO/attributes/BusinessDetail/attributes/DetailValueDETAIL_COUNTVARCHARDetail Countconfiguration/entityTypes/HCO/attributes/BusinessDetail/attributes/DetailCountHINHINReltio URI: configuration/entityTypes/HCO/attributes/HINMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameHIN_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeHINVARCHARHINconfiguration/entityTypes/HCO/attributes/HIN/attributes/HINTICKERReltio URI: configuration/entityTypes/HCO/attributes/TickerMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameTICKER_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeSYMBOLVARCHARconfiguration/entityTypes/HCO/attributes/Ticker/attributes/SymbolSTOCK_EXCHANGEVARCHARconfiguration/entityTypes/HCO/attributes/Ticker/attributes/StockExchangeTRADE_STYLE_NAMEReltio URI: 
configuration/entityTypes/HCO/attributes/TradeStyleNameMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameTRADE_STYLE_NAME_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeORGANIZATION_NAMEVARCHARconfiguration/entityTypes/HCO/attributes/TradeStyleName/attributes/OrganizationNameLANGUAGE_CODEVARCHARconfiguration/entityTypes/HCO/attributes/TradeStyleName/attributes/LanguageCodeFORMER_ORGANIZATION_PRIMARY_NAMEVARCHARconfiguration/entityTypes/HCO/attributes/TradeStyleName/attributes/FormerOrganizationPrimaryNameDISPLAY_SEQUENCEVARCHARconfiguration/entityTypes/HCO/attributes/TradeStyleName/attributes/DisplaySequenceTYPEVARCHARconfiguration/entityTypes/HCO/attributes/TradeStyleName/attributes/TypePRIOR_DUNS_NUMBERReltio URI: configuration/entityTypes/HCO/attributes/PriorDUNSNUmberMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NamePRIOR_DUNS_NUMBER_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeTRANSFER_DUNS_NUMBERVARCHARconfiguration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferDUNSNumberTRANSFER_REASON_TEXTVARCHARconfiguration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferReasonTextTRANSFER_REASON_CODEVARCHARconfiguration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferReasonCodeTRANSFER_DATEVARCHARconfiguration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferDateTRANSFERRED_FROM_DUNS_NUMBERVARCHARconfiguration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferredFromDUNSNumberTRANSFERRED_TO_DUNS_NUMBERVARCHARconfiguration/entityTypes/HCO/attributes/PriorDUNSNUmber/attributes/TransferredToDUNSNumberINDUSTRY_CODEReltio URI: configuration/entityTypes/HCO/attributes/IndustryCodeMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameINDUSTRY_CODE_URIVARCHARGenerated 
KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeDNB_CODEVARCHARconfiguration/entityTypes/HCO/attributes/IndustryCode/attributes/DNBCodeINDUSTRY_CODEVARCHARconfiguration/entityTypes/HCO/attributes/IndustryCode/attributes/IndustryCodeINDUSTRY_CODE_DESCRIPTIONVARCHARconfiguration/entityTypes/HCO/attributes/IndustryCode/attributes/IndustryCodeDescriptionINDUSTRY_CODE_LANGUAGE_CODEVARCHARconfiguration/entityTypes/HCO/attributes/IndustryCode/attributes/IndustryCodeLanguageCodeINDUSTRY_CODE_WRITING_SCRIPTVARCHARconfiguration/entityTypes/HCO/attributes/IndustryCode/attributes/IndustryCodeWritingScriptDISPLAY_SEQUENCEVARCHARconfiguration/entityTypes/HCO/attributes/IndustryCode/attributes/DisplaySequenceSALES_PERCENTAGEVARCHARconfiguration/entityTypes/HCO/attributes/IndustryCode/attributes/SalesPercentageTYPEVARCHARconfiguration/entityTypes/HCO/attributes/IndustryCode/attributes/TypeINDUSTRY_TYPE_CODEVARCHARconfiguration/entityTypes/HCO/attributes/IndustryCode/attributes/IndustryTypeCodeIMPORT_EXPORT_AGENTVARCHARconfiguration/entityTypes/HCO/attributes/IndustryCode/attributes/ImportExportAgentACTIVITIES_AND_OPERATIONSReltio URI: configuration/entityTypes/HCO/attributes/ActivitiesAndOperationsMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameACTIVITIES_AND_OPERATIONS_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity 
TypeLINE_OF_BUSINESS_DESCRIPTIONVARCHARconfiguration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/LineOfBusinessDescriptionLANGUAGE_CODEVARCHARconfiguration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/LanguageCodeWRITING_SCRIPT_CODEVARCHARconfiguration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/WritingScriptCodeIMPORT_INDICATORBOOLEANconfiguration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/ImportIndicatorEXPORT_INDICATORBOOLEANconfiguration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/ExportIndicatorAGENT_INDICATORBOOLEANconfiguration/entityTypes/HCO/attributes/ActivitiesAndOperations/attributes/AgentIndicatorEMPLOYEE_DETAILSReltio URI: configuration/entityTypes/HCO/attributes/EmployeeDetailsMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameEMPLOYEE_DETAILS_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeINDIVIDUAL_EMPLOYEE_FIGURES_DATEVARCHARconfiguration/entityTypes/HCO/attributes/EmployeeDetails/attributes/IndividualEmployeeFiguresDateINDIVIDUAL_TOTAL_EMPLOYEE_QUANTITYVARCHARconfiguration/entityTypes/HCO/attributes/EmployeeDetails/attributes/IndividualTotalEmployeeQuantityINDIVIDUAL_RELIABILITY_TEXTVARCHARconfiguration/entityTypes/HCO/attributes/EmployeeDetails/attributes/IndividualReliabilityTextTOTAL_EMPLOYEE_QUANTITYVARCHARconfiguration/entityTypes/HCO/attributes/EmployeeDetails/attributes/TotalEmployeeQuantityTOTAL_EMPLOYEE_RELIABILITYVARCHARconfiguration/entityTypes/HCO/attributes/EmployeeDetails/attributes/TotalEmployeeReliabilityPRINCIPALS_INCLUDEDVARCHARconfiguration/entityTypes/HCO/attributes/EmployeeDetails/attributes/PrincipalsIncludedKEY_FINANCIAL_FIGURES_OVERVIEWReltio URI: configuration/entityTypes/HCO/attributes/KeyFinancialFiguresOverviewMaterialized: noColumnTypeDescriptionReltio Attribute URILOV 
NameKEY_FINANCIAL_FIGURES_OVERVIEW_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeFINANCIAL_STATEMENT_TO_DATEDATEconfiguration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/FinancialStatementToDateFINANCIAL_PERIOD_DURATIONVARCHARconfiguration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/FinancialPeriodDurationSALES_REVENUE_CURRENCYVARCHARconfiguration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueCurrencySALES_REVENUE_CURRENCY_CODEVARCHARconfiguration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueCurrencyCodeSALES_REVENUE_RELIABILITY_CODEVARCHARconfiguration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueReliabilityCodeSALES_REVENUE_UNIT_OF_SIZEVARCHARconfiguration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueUnitOfSizeSALES_REVENUE_AMOUNTVARCHARconfiguration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesRevenueAmountPROFIT_OR_LOSS_CURRENCYVARCHARconfiguration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/ProfitOrLossCurrencyPROFIT_OR_LOSS_RELIABILITY_TEXTVARCHARconfiguration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/ProfitOrLossReliabilityTextPROFIT_OR_LOSS_UNIT_OF_SIZEVARCHARconfiguration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/ProfitOrLossUnitOfSizePROFIT_OR_LOSS_AMOUNTVARCHARconfiguration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/ProfitOrLossAmountSALES_TURNOVER_GROWTH_RATEVARCHARconfiguration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/SalesTurnoverGrowthRateSALES3YRY_GROWTH_RATEVARCHARconfiguration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/Sales3YryGrowthRateSALES5YRY_GROWTH_RATEVARCHARconfiguration/entityTypes/HCO/attributes/KeyFinancial
FiguresOverview/attributes/Sales5YryGrowthRateEMPLOYEE3YRY_GROWTH_RATEVARCHARconfiguration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/Employee3YryGrowthRateEMPLOYEE5YRY_GROWTH_RATEVARCHARconfiguration/entityTypes/HCO/attributes/KeyFinancialFiguresOverview/attributes/Employee5YryGrowthRateMATCH_QUALITYReltio URI: configuration/entityTypes/HCO/attributes/MatchQualityMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameMATCH_QUALITY_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeCONFIDENCE_CODEVARCHARDnB Match Quality Confidence Codeconfiguration/entityTypes/HCO/attributes/MatchQuality/attributes/ConfidenceCodeDISPLAY_SEQUENCEVARCHARDnB Match Quality Display Sequenceconfiguration/entityTypes/HCO/attributes/MatchQuality/attributes/DisplaySequenceMATCH_CODEVARCHARconfiguration/entityTypes/HCO/attributes/MatchQuality/attributes/MatchCodeBEMFABVARCHARconfiguration/entityTypes/HCO/attributes/MatchQuality/attributes/BEMFABMATCH_GRADEVARCHARconfiguration/entityTypes/HCO/attributes/MatchQuality/attributes/MatchGradeORGANIZATION_DETAILReltio URI: configuration/entityTypes/HCO/attributes/OrganizationDetailMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameORGANIZATION_DETAIL_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity 
TypeMEMBER_ROLEVARCHARconfiguration/entityTypes/HCO/attributes/OrganizationDetail/attributes/MemberRoleSTANDALONEBOOLEANconfiguration/entityTypes/HCO/attributes/OrganizationDetail/attributes/StandaloneCONTROL_OWNERSHIP_DATEDATEconfiguration/entityTypes/HCO/attributes/OrganizationDetail/attributes/ControlOwnershipDateOPERATING_STATUSVARCHARconfiguration/entityTypes/HCO/attributes/OrganizationDetail/attributes/OperatingStatusSTART_YEARVARCHARconfiguration/entityTypes/HCO/attributes/OrganizationDetail/attributes/StartYearFRANCHISE_OPERATION_TYPEVARCHARconfiguration/entityTypes/HCO/attributes/OrganizationDetail/attributes/FranchiseOperationTypeBONEYARD_ORGANIZATIONBOOLEANconfiguration/entityTypes/HCO/attributes/OrganizationDetail/attributes/BoneyardOrganizationOPERATING_STATUS_COMMENTVARCHARconfiguration/entityTypes/HCO/attributes/OrganizationDetail/attributes/OperatingStatusCommentDUNS_HIERARCHYReltio URI: configuration/entityTypes/HCO/attributes/DUNSHierarchyMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameDUNS_HIERARCHY_URIVARCHARGenerated KeyENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity 
TypeGLOBAL_ULTIMATE_DUNSVARCHARconfiguration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/GlobalUltimateDUNSGLOBAL_ULTIMATE_ORGANIZATIONVARCHARconfiguration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/GlobalUltimateOrganizationDOMESTIC_ULTIMATE_DUNSVARCHARconfiguration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/DomesticUltimateDUNSDOMESTIC_ULTIMATE_ORGANIZATIONVARCHARconfiguration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/DomesticUltimateOrganizationPARENT_DUNSVARCHARconfiguration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/ParentDUNSPARENT_ORGANIZATIONVARCHARconfiguration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/ParentOrganizationHEADQUARTERS_DUNSVARCHARconfiguration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/HeadquartersDUNSHEADQUARTERS_ORGANIZATIONVARCHARconfiguration/entityTypes/HCO/attributes/DUNSHierarchy/attributes/HeadquartersOrganizationMCOManaged Care OrganizationReltio URI: configuration/entityTypes/MCOMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameENTITY_URIVARCHARReltio Entity URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagENTITY_TYPEVARCHARReltio Entity TypeCOMPANY_CUST_IDVARCHARCOMPANY Customer IDconfiguration/entityTypes/MCO/attributes/COMPANYCustIDNAMEVARCHARNameconfiguration/entityTypes/MCO/attributes/NameTYPEVARCHARTypeconfiguration/entityTypes/MCO/attributes/TypeMCOTypeMANAGED_CARE_CHANNELVARCHARManaged Care Channelconfiguration/entityTypes/MCO/attributes/ManagedCareChannelMCOManagedCareChannelPLAN_MODEL_TYPEVARCHARPlanModelTypeconfiguration/entityTypes/MCO/attributes/PlanModelTypeMCOPlanModelTypeSUB_TYPEVARCHARSubTypeconfiguration/entityTypes/MCO/attributes/SubTypeMCOSubTypeSUB_TYPE2VARCHARSubType2configuration/entityTypes/MCO/attributes/SubType2SUB_TYPE3VARCHARSub Type 3configuration/entityTypes/MCO/attributes/SubType3NUM_LIVES_MEDICAREVARCHARMedicare Number of Livesconfiguration/entityTypes/MCO/attributes/NumLives_MedicareNUM_LIVES_MEDICALVARCHARMedical 
Number of Livesconfiguration/entityTypes/MCO/attributes/NumLives_MedicalNUM_LIVES_PHARMACYVARCHARPharmacy Number of Livesconfiguration/entityTypes/MCO/attributes/NumLives_PharmacyOPERATING_STATEVARCHARState Operating fromconfiguration/entityTypes/MCO/attributes/Operating_StateORIGINAL_SOURCE_NAMEVARCHAROriginal Source Nameconfiguration/entityTypes/MCO/attributes/OriginalSourceNameDISTRIBUTION_CHANNELVARCHARDistribution Channelconfiguration/entityTypes/MCO/attributes/DistributionChannelACCESS_LANDSCAPE_FORMULARY_CHANNELVARCHARAccess Landscape Formulary Channelconfiguration/entityTypes/MCO/attributes/AccessLandscapeFormularyChannelEFFECTIVE_START_DATEDATEEffective Start Dateconfiguration/entityTypes/MCO/attributes/EffectiveStartDateEFFECTIVE_END_DATEDATEEffective End Dateconfiguration/entityTypes/MCO/attributes/EffectiveEndDateSTATUSVARCHARStatusconfiguration/entityTypes/MCO/attributes/StatusMCOStatusSOURCE_MATCH_CATEGORYVARCHARSource Match Categoryconfiguration/entityTypes/MCO/attributes/SourceMatchCategoryCOUNTRY_MCOVARCHARCountryconfiguration/entityTypes/MCO/attributes/CountryAFFILIATIONSReltio URI: configuration/relationTypes/FlextoDDDAffiliations, configuration/relationTypes/Ownership, configuration/relationTypes/PAYERtoPLAN, configuration/relationTypes/PBMVendortoMCO, configuration/relationTypes/ACOAffiliations, configuration/relationTypes/MCOtoPLAN, configuration/relationTypes/FlextoHCOSAffiliations, configuration/relationTypes/FlextoSAPAffiliations, ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●, configuration/relationTypes/HCOStoDDDAffiliations, configuration/relationTypes/EnterprisetoBOB, configuration/relationTypes/OtherHCOtoHCOAffiliations, configuration/relationTypes/ContactAffiliations, configuration/relationTypes/VAAffiliations, configuration/relationTypes/PBMtoPLAN, configuration/relationTypes/Purchasing, configuration/relationTypes/BOBtoMCO, configuration/relationTypes/DDDtoSAPAffiliations, ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●, 
configuration/relationTypes/ProviderAffiliations, configuration/relationTypes/SAPtoHCOSAffiliationsMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameRELATION_URIVARCHARReltio Relation URICOUNTRYVARCHARCountry CodeACTIVEVARCHARActive FlagRELATION_TYPEVARCHARReltio Relation TypeSTART_ENTITY_URIVARCHARReltio Start Entity URIEND_ENTITY_URIVARCHARReltio End Entity URISOURCEVARCHARconfiguration/relationTypes/FlextoDDDAffiliations/attributes/Source, configuration/relationTypes/Ownership/attributes/Source, configuration/relationTypes/PAYERtoPLAN/attributes/Source, configuration/relationTypes/PBMVendortoMCO/attributes/Source, configuration/relationTypes/ACOAffiliations/attributes/Source, configuration/relationTypes/MCOtoPLAN/attributes/Source, configuration/relationTypes/FlextoHCOSAffiliations/attributes/Source, configuration/relationTypes/FlextoSAPAffiliations/attributes/Source, configuration/relationTypes/MCOtoMMITORG/attributes/Source, configuration/relationTypes/HCOStoDDDAffiliations/attributes/Source, configuration/relationTypes/EnterprisetoBOB/attributes/Source, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/Source, configuration/relationTypes/ContactAffiliations/attributes/Source, configuration/relationTypes/VAAffiliations/attributes/Source, configuration/relationTypes/PBMtoPLAN/attributes/Source, configuration/relationTypes/Purchasing/attributes/Source, configuration/relationTypes/BOBtoMCO/attributes/Source, configuration/relationTypes/DDDtoSAPAffiliations/attributes/Source, configuration/relationTypes/Distribution/attributes/Source, configuration/relationTypes/ProviderAffiliations/attributes/Source, configuration/relationTypes/SAPtoHCOSAffiliations/attributes/SourceLINKED_BYVARCHARconfiguration/relationTypes/FlextoDDDAffiliations/attributes/LinkedBy, configuration/relationTypes/FlextoHCOSAffiliations/attributes/LinkedBy, configuration/relationTypes/FlextoSAPAffiliations/attributes/LinkedBy, 
configuration/relationTypes/SAPtoHCOSAffiliations/attributes/LinkedByCOUNTRY_AFFILIATIONSVARCHARconfiguration/relationTypes/FlextoDDDAffiliations/attributes/Country, configuration/relationTypes/Ownership/attributes/Country, configuration/relationTypes/PAYERtoPLAN/attributes/Country, configuration/relationTypes/PBMVendortoMCO/attributes/Country, configuration/relationTypes/ACOAffiliations/attributes/Country, configuration/relationTypes/MCOtoPLAN/attributes/Country, configuration/relationTypes/FlextoHCOSAffiliations/attributes/Country, configuration/relationTypes/FlextoSAPAffiliations/attributes/Country, configuration/relationTypes/MCOtoMMITORG/attributes/Country, configuration/relationTypes/HCOStoDDDAffiliations/attributes/Country, configuration/relationTypes/EnterprisetoBOB/attributes/Country, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/Country, configuration/relationTypes/ContactAffiliations/attributes/Country, configuration/relationTypes/VAAffiliations/attributes/Country, configuration/relationTypes/PBMtoPLAN/attributes/Country, configuration/relationTypes/Purchasing/attributes/Country, configuration/relationTypes/BOBtoMCO/attributes/Country, configuration/relationTypes/DDDtoSAPAffiliations/attributes/Country, configuration/relationTypes/Distribution/attributes/Country, configuration/relationTypes/ProviderAffiliations/attributes/Country, configuration/relationTypes/SAPtoHCOSAffiliations/attributes/CountryAFFILIATION_TYPEVARCHARconfiguration/relationTypes/PAYERtoPLAN/attributes/AffiliationType, configuration/relationTypes/PBMVendortoMCO/attributes/AffiliationType, configuration/relationTypes/MCOtoPLAN/attributes/AffiliationType, configuration/relationTypes/MCOtoMMITORG/attributes/AffiliationType, configuration/relationTypes/EnterprisetoBOB/attributes/AffiliationType, configuration/relationTypes/VAAffiliations/attributes/AffiliationType, configuration/relationTypes/PBMtoPLAN/attributes/AffiliationType, 
configuration/relationTypes/BOBtoMCO/attributes/AffiliationTypePBM_AFFILIATION_TYPEVARCHARconfiguration/relationTypes/PAYERtoPLAN/attributes/PBMAffiliationType, configuration/relationTypes/PBMVendortoMCO/attributes/PBMAffiliationType, configuration/relationTypes/MCOtoPLAN/attributes/PBMAffiliationType, configuration/relationTypes/MCOtoMMITORG/attributes/PBMAffiliationType, configuration/relationTypes/EnterprisetoBOB/attributes/PBMAffiliationType, configuration/relationTypes/PBMtoPLAN/attributes/PBMAffiliationType, configuration/relationTypes/BOBtoMCO/attributes/PBMAffiliationTypePLAN_MODEL_TYPEVARCHARconfiguration/relationTypes/PAYERtoPLAN/attributes/PlanModelType, configuration/relationTypes/PBMVendortoMCO/attributes/PlanModelType, configuration/relationTypes/MCOtoPLAN/attributes/PlanModelType, configuration/relationTypes/MCOtoMMITORG/attributes/PlanModelType, configuration/relationTypes/EnterprisetoBOB/attributes/PlanModelType, configuration/relationTypes/PBMtoPLAN/attributes/PlanModelType, configuration/relationTypes/BOBtoMCO/attributes/PlanModelTypeMCOPlanModelTypeMANAGED_CARE_CHANNELVARCHARconfiguration/relationTypes/PAYERtoPLAN/attributes/ManagedCareChannel, configuration/relationTypes/PBMVendortoMCO/attributes/ManagedCareChannel, configuration/relationTypes/MCOtoPLAN/attributes/ManagedCareChannel, configuration/relationTypes/MCOtoMMITORG/attributes/ManagedCareChannel, configuration/relationTypes/EnterprisetoBOB/attributes/ManagedCareChannel, configuration/relationTypes/PBMtoPLAN/attributes/ManagedCareChannel, configuration/relationTypes/BOBtoMCO/attributes/ManagedCareChannelMCOManagedCareChannelEFFECTIVE_START_DATEDATEconfiguration/relationTypes/MCOtoPLAN/attributes/EffectiveStartDateEFFECTIVE_END_DATEDATEconfiguration/relationTypes/MCOtoPLAN/attributes/EffectiveEndDateSTATUSVARCHARconfiguration/relationTypes/VAAffiliations/attributes/StatusAFFIL_RELATION_TYPEReltio URI: configuration/relationTypes/Ownership/attributes/RelationType, 
configuration/relationTypes/ACOAffiliations/attributes/RelationType, configuration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType, configuration/relationTypes/ContactAffiliations/attributes/RelationType, configuration/relationTypes/Purchasing/attributes/RelationType, configuration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType, configuration/relationTypes/Distribution/attributes/RelationType, configuration/relationTypes/ProviderAffiliations/attributes/RelationTypeMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameRELATION_TYPE_URIVARCHARGenerated KeyRELATION_URIVARCHARReltio Relation URIRELATIONSHIP_GROUP_OWNERSHIPVARCHARconfiguration/relationTypes/Ownership/attributes/RelationType/attributes/RelationshipGroupHCORelationGroupRELATIONSHIP_DESCRIPTION_OWNERSHIPVARCHARconfiguration/relationTypes/Ownership/attributes/RelationType/attributes/RelationshipDescriptionHCORelationDescriptionRELATIONSHIP_ORDERVARCHARconfiguration/relationTypes/Ownership/attributes/RelationType/attributes/RelationshipOrder, configuration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType/attributes/RelationshipOrder, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType/attributes/RelationshipOrder, configuration/relationTypes/Purchasing/attributes/RelationType/attributes/RelationshipOrder, configuration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType/attributes/RelationshipOrder, configuration/relationTypes/Distribution/attributes/RelationType/attributes/RelationshipOrderRANKVARCHARconfiguration/relationTypes/Ownership/attributes/RelationType/attributes/Rank, configuration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/Rank, configuration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType/attributes/Rank, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType/attributes/Rank, 
configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/Rank, configuration/relationTypes/Purchasing/attributes/RelationType/attributes/Rank, configuration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType/attributes/Rank, configuration/relationTypes/Distribution/attributes/RelationType/attributes/Rank, configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/RankAMA_HOSPITAL_IDVARCHARconfiguration/relationTypes/Ownership/attributes/RelationType/attributes/AMAHospitalID, configuration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType/attributes/AMAHospitalID, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType/attributes/AMAHospitalID, configuration/relationTypes/Purchasing/attributes/RelationType/attributes/AMAHospitalID, configuration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType/attributes/AMAHospitalID, configuration/relationTypes/Distribution/attributes/RelationType/attributes/AMAHospitalIDAMA_HOSPITAL_HOURSVARCHARconfiguration/relationTypes/Ownership/attributes/RelationType/attributes/AMAHospitalHours, configuration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType/attributes/AMAHospitalHours, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType/attributes/AMAHospitalHours, configuration/relationTypes/Purchasing/attributes/RelationType/attributes/AMAHospitalHours, configuration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType/attributes/AMAHospitalHours, configuration/relationTypes/Distribution/attributes/RelationType/attributes/AMAHospitalHoursEFFECTIVE_START_DATEDATEconfiguration/relationTypes/Ownership/attributes/RelationType/attributes/EffectiveStartDate, configuration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/EffectiveStartDate, configuration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType/attributes/EffectiveStartDate, 
configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType/attributes/EffectiveStartDate, configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/EffectiveStartDate, configuration/relationTypes/Purchasing/attributes/RelationType/attributes/EffectiveStartDate, configuration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType/attributes/EffectiveStartDate, configuration/relationTypes/Distribution/attributes/RelationType/attributes/EffectiveStartDate, configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/EffectiveStartDateEFFECTIVE_END_DATEDATEconfiguration/relationTypes/Ownership/attributes/RelationType/attributes/EffectiveEndDate, configuration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/EffectiveEndDate, configuration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType/attributes/EffectiveEndDate, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType/attributes/EffectiveEndDate, configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/EffectiveEndDate, configuration/relationTypes/Purchasing/attributes/RelationType/attributes/EffectiveEndDate, configuration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType/attributes/EffectiveEndDate, configuration/relationTypes/Distribution/attributes/RelationType/attributes/EffectiveEndDate, configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/EffectiveEndDateACTIVE_FLAGBOOLEANconfiguration/relationTypes/Ownership/attributes/RelationType/attributes/ActiveFlag, configuration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/ActiveFlag, configuration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType/attributes/ActiveFlag, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType/attributes/ActiveFlag, 
configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/ActiveFlag, configuration/relationTypes/Purchasing/attributes/RelationType/attributes/ActiveFlag, configuration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType/attributes/ActiveFlag, configuration/relationTypes/Distribution/attributes/RelationType/attributes/ActiveFlag, configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/ActiveFlagPRIMARY_AFFILIATIONVARCHARconfiguration/relationTypes/Ownership/attributes/RelationType/attributes/PrimaryAffiliation, configuration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/PrimaryAffiliation, configuration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType/attributes/PrimaryAffiliation, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType/attributes/PrimaryAffiliation, configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/PrimaryAffiliation, configuration/relationTypes/Purchasing/attributes/RelationType/attributes/PrimaryAffiliation, configuration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType/attributes/PrimaryAffiliation, configuration/relationTypes/Distribution/attributes/RelationType/attributes/PrimaryAffiliation, configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/PrimaryAffiliationAFFILIATION_CONFIDENCE_CODEVARCHARconfiguration/relationTypes/Ownership/attributes/RelationType/attributes/AffiliationConfidenceCode, configuration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/AffiliationConfidenceCode, configuration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType/attributes/AffiliationConfidenceCode, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType/attributes/AffiliationConfidenceCode, configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/AffiliationConfidenceCode, 
configuration/relationTypes/Purchasing/attributes/RelationType/attributes/AffiliationConfidenceCode, configuration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType/attributes/AffiliationConfidenceCode, configuration/relationTypes/Distribution/attributes/RelationType/attributes/AffiliationConfidenceCode, configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/AffiliationConfidenceCodeRELATIONSHIP_GROUP_ACOAFFILIATIONSVARCHARconfiguration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/RelationshipGroupHCPRelationGroupRELATIONSHIP_DESCRIPTION_ACOAFFILIATIONSVARCHARconfiguration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/RelationshipDescriptionHCPRelationshipDescriptionRELATIONSHIP_STATUS_CODEVARCHARconfiguration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/RelationshipStatusCode, configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/RelationshipStatusCode, configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/RelationshipStatusCodeHCPtoHCORelationshipStatusRELATIONSHIP_STATUS_REASON_CODEVARCHARconfiguration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/RelationshipStatusReasonCode, configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/RelationshipStatusReasonCode, configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/RelationshipStatusReasonCodeHCPtoHCORelationshipStatusReasonCodeWORKING_STATUSVARCHARconfiguration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/WorkingStatus, configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/WorkingStatus, 
configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/WorkingStatusWorkingStatusRELATIONSHIP_GROUP_HCOSTODDDAFFILIATIONSVARCHARconfiguration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType/attributes/RelationshipGroupHCORelationGroupRELATIONSHIP_DESCRIPTION_HCOSTODDDAFFILIATIONSVARCHARconfiguration/relationTypes/HCOStoDDDAffiliations/attributes/RelationType/attributes/RelationshipDescriptionHCORelationDescriptionRELATIONSHIP_GROUP_OTHERHCOTOHCOAFFILIATIONSVARCHARconfiguration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType/attributes/RelationshipGroupHCORelationGroupRELATIONSHIP_DESCRIPTION_OTHERHCOTOHCOAFFILIATIONSVARCHARconfiguration/relationTypes/OtherHCOtoHCOAffiliations/attributes/RelationType/attributes/RelationshipDescriptionHCORelationDescriptionRELATIONSHIP_GROUP_CONTACTAFFILIATIONSVARCHARconfiguration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/RelationshipGroupHCPRelationGroupRELATIONSHIP_DESCRIPTION_CONTACTAFFILIATIONSVARCHARconfiguration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/RelationshipDescriptionHCPRelationshipDescriptionRELATIONSHIP_GROUP_PURCHASINGVARCHARconfiguration/relationTypes/Purchasing/attributes/RelationType/attributes/RelationshipGroupHCORelationGroupRELATIONSHIP_DESCRIPTION_PURCHASINGVARCHARconfiguration/relationTypes/Purchasing/attributes/RelationType/attributes/RelationshipDescriptionHCORelationDescriptionRELATIONSHIP_GROUP_DDDTOSAPAFFILIATIONSVARCHARconfiguration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType/attributes/RelationshipGroupHCORelationGroupRELATIONSHIP_DESCRIPTION_DDDTOSAPAFFILIATIONSVARCHARconfiguration/relationTypes/DDDtoSAPAffiliations/attributes/RelationType/attributes/RelationshipDescriptionHCORelationDescriptionRELATIONSHIP_GROUP_DISTRIBUTIONVARCHARconfiguration/relationTypes/Distribution/attributes/RelationType/attributes/RelationshipGroupHCORelationGroupRELATIONSHIP_DESCRIPTION_DISTRIBUTIONVA
RCHARconfiguration/relationTypes/Distribution/attributes/RelationType/attributes/RelationshipDescriptionHCORelationDescriptionRELATIONSHIP_GROUP_PROVIDERAFFILIATIONSVARCHARconfiguration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/RelationshipGroupHCPRelationGroupRELATIONSHIP_DESCRIPTION_PROVIDERAFFILIATIONSVARCHARconfiguration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/RelationshipDescriptionHCPRelationshipDescriptionAFFIL_ACOReltio URI: configuration/relationTypes/Ownership/attributes/ACO, configuration/relationTypes/ACOAffiliations/attributes/ACO, configuration/relationTypes/HCOStoDDDAffiliations/attributes/ACO, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/ACO, configuration/relationTypes/ContactAffiliations/attributes/ACO, configuration/relationTypes/Purchasing/attributes/ACO, configuration/relationTypes/DDDtoSAPAffiliations/attributes/ACO, configuration/relationTypes/Distribution/attributes/ACO, configuration/relationTypes/ProviderAffiliations/attributes/ACOMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameACO_URIVARCHARGenerated KeyRELATION_URIVARCHARReltio Relation URIACO_TYPEVARCHARconfiguration/relationTypes/Ownership/attributes/ACO/attributes/ACOType, configuration/relationTypes/ACOAffiliations/attributes/ACO/attributes/ACOType, configuration/relationTypes/HCOStoDDDAffiliations/attributes/ACO/attributes/ACOType, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/ACO/attributes/ACOType, configuration/relationTypes/ContactAffiliations/attributes/ACO/attributes/ACOType, configuration/relationTypes/Purchasing/attributes/ACO/attributes/ACOType, configuration/relationTypes/DDDtoSAPAffiliations/attributes/ACO/attributes/ACOType, configuration/relationTypes/Distribution/attributes/ACO/attributes/ACOType, 
configuration/relationTypes/ProviderAffiliations/attributes/ACO/attributes/ACOTypeHCOACOTypeACO_TYPE_CATEGORYVARCHARconfiguration/relationTypes/Ownership/attributes/ACO/attributes/ACOTypeCategory, configuration/relationTypes/ACOAffiliations/attributes/ACO/attributes/ACOTypeCategory, configuration/relationTypes/HCOStoDDDAffiliations/attributes/ACO/attributes/ACOTypeCategory, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/ACO/attributes/ACOTypeCategory, configuration/relationTypes/ContactAffiliations/attributes/ACO/attributes/ACOTypeCategory, configuration/relationTypes/Purchasing/attributes/ACO/attributes/ACOTypeCategory, configuration/relationTypes/DDDtoSAPAffiliations/attributes/ACO/attributes/ACOTypeCategory, configuration/relationTypes/Distribution/attributes/ACO/attributes/ACOTypeCategory, configuration/relationTypes/ProviderAffiliations/attributes/ACO/attributes/ACOTypeCategoryHCOACOTypeCategoryACO_TYPE_GROUPVARCHARconfiguration/relationTypes/Ownership/attributes/ACO/attributes/ACOTypeGroup, configuration/relationTypes/ACOAffiliations/attributes/ACO/attributes/ACOTypeGroup, configuration/relationTypes/HCOStoDDDAffiliations/attributes/ACO/attributes/ACOTypeGroup, configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/ACO/attributes/ACOTypeGroup, configuration/relationTypes/ContactAffiliations/attributes/ACO/attributes/ACOTypeGroup, configuration/relationTypes/Purchasing/attributes/ACO/attributes/ACOTypeGroup, configuration/relationTypes/DDDtoSAPAffiliations/attributes/ACO/attributes/ACOTypeGroup, configuration/relationTypes/Distribution/attributes/ACO/attributes/ACOTypeGroup, configuration/relationTypes/ProviderAffiliations/attributes/ACO/attributes/ACOTypeGroupHCOACOTypeGroupAFFIL_RELATION_TYPE_ROLEReltio URI: configuration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/Role, configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/Role, 
configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/RoleMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameRELATION_TYPE_URIVARCHARGenerated KeyROLE_URIVARCHARGenerated KeyRELATION_URIVARCHARReltio Relation URIROLEVARCHARconfiguration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/Role/attributes/Role, configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/Role/attributes/Role, configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/Role/attributes/RoleRoleTypeRANKVARCHARconfiguration/relationTypes/ACOAffiliations/attributes/RelationType/attributes/Role/attributes/Rank, configuration/relationTypes/ContactAffiliations/attributes/RelationType/attributes/Role/attributes/Rank, configuration/relationTypes/ProviderAffiliations/attributes/RelationType/attributes/Role/attributes/RankAFFIL_USAGE_TAGReltio URI: configuration/relationTypes/ProviderAffiliations/attributes/UsageTagMaterialized: noColumnTypeDescriptionReltio Attribute URILOV NameUSAGE_TAG_URIVARCHARGenerated KeyRELATION_URIVARCHARReltio Relation URIUSAGE_TAGVARCHARconfiguration/relationTypes/ProviderAffiliations/attributes/UsageTag/attributes/UsageTag"
},
{
"title": "CUSTOMER_SL schema",
"pageID": "163924327",
"pageLink": "/display/GMDM/CUSTOMER_SL+schema",
"content": "The schema plays the role of access layer for clients reading MDM data. It includes a set of views that are directly inherited from CUSTOMER schema.Views have the same structure as views in CUSTOMER schemat. To learn about view definitions please see CUSTOMER schema. In regional data marts, the schema views have MDM prefix. In CUSTOMER_SL schema in Global Data Mart views are prefixed with 'P'  for COMPANY Reltio Model,'I' for IQIVIA Reltio model, and 'P_HI' for Historical Inactive data for COMPANY Reltio Model.To speed up access, most views are being materialized to physical tables. The process is transparent to users. Access views are being switched to physical tables automatically if they are available.  The refresh process is incremental and connected with the loading process. "
},
{
"title": "LANDING schema",
"pageID": "163920137",
"pageLink": "/display/GMDM/LANDING+schema",
"content": "LANDING schema plays a role of the staging database for publishing  MDM data from Reltio tenants throught MDM HUBHUB_KAFKA_DATATarget table for KAFA events published through Snowflake pipe.ColumnTypeDescriptionRECORD_METADATAVARIANTMetadata of KAFKA event like KAFKA key, topic, partition, create timeRECORD_CONTENTVARIANTEvent payloadLOV_DATATarget table for LOV data publish ColumnTypeDescription IDTEXTLOV object idOBJECTVARIANTRelto RDM json objectMERGE_TREE_DATATarget table for merge_tree exports from ReltioColumnTypeDescription FILENAMETEXTFull S3 file pathOBJECTVARIANTRelto MERGE_TREE json objectHI_DATATarget table for ad-hoc historical inactive dataColumnTypeDescription OBJECTVARIANTHistorical Inactive json object"
},
{
"title": "PTE_SL",
"pageID": "302687546",
"pageLink": "/display/GMDM/PTE_SL",
"content": "The schema plays the role of access layer for Clients reading data required for PT&E reports. It mimics its structure and logic. To make a connection to the PTE_SL schema you need to have a proper role assigned:COMM_GBL_MDM_DMART_DEV_PTE_ROLECOMM_GBL_MDM_DMART_QA_PTE_ROLECOMM_GBL_MDM_DMART_STG_PTE_ROLECOMM_GBL_MDM_DMART_PROD_PTE_ROLEthat are connected with groups:sfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_DEV_PTE_ROLE\nsfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_QA_PTE_ROLE\nsfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_STG_PTE_ROLE\nsfdb_eu-west-1_emeaprod01_COMM_GBL_MDM_DMART_PROD_PTE_ROLEInformation how to request for an acces is described here: Snowflake - connection guidSnowflake path to the client report: "COMM_GBL_MDM_DMART_PROD_DB"."PTE_SL"."PTE_REPORT"General assumptions for view creation:The views integrate both data models COMPANY and IQIVIA via a Union function. Meaning that they're calculated separately and then joined together. driven_tabel1.iso_code = entity_uri.country The lang_code from the code translations is always 'en'In case the hcp identifiers aren't provided by the client there is an option to calculate them dynamically by the number of HCPs having the identifier.Driven tables:DRIVEN_TABLE1This is a view selecting data from the country_config table for countries that need to be added to the PTE_REPORTColumn nameDescriptionISO_CODEISO2 code of the countryNAMECountry nameLABELCountry label (name + iso_code)RELTIO_TENANTEither 'IQVIA' or the region of the Reltio tenant (EMEA/AMER...)HUB_TENANTIndicator of the HUB database the date comes fromSF_INSTANCEName of the Snowflake instance the data comes from (emeaprod01.eu-west-1...)SF_TENANTDATABASEFull database name form which the data comes fromCUSTOMERSL_PREFIXeither 'i_' for the IQVIA data model or 'p_' for the COMPANY data modelDRIVEN_TABLEV2 / DRIVEN_TABLE2_STATICDRIVEN_TABLEV2 is a view used to get the HCP identifiers and sort them by the count of HCPs that have the identifier. 
DRIVEN_TABLE2_STATIC is a table containing the list of identifiers used per country and the order in which they're placed in the PTE_REPORT view. If the country isn't available in DRIVEN_TABLE2_STATIC, the report will use DRIVEN_TABLEV2 to get them calculated dynamically every time the report is used.Column nameDescriptionISO_CODEISO2 code of the countryCANONICAL_CODECanonical code of the identifierLANG_DESCCode description in EnglishCODE_IDCode idMODELEither 'i' for the IQVIA data model or 'p' for the COMPANY data modelORDER_IDOrder in which the identifier will be available in the PTE_REPORT view. Only identifiers from 1 to 5 will be used.DRIVEN_TABLE3Specialty dictionary provided by the client for the IQVIA data model only. Used for calculating the is_prescriber data.'IS PRESCRIBER' calculation method for IQIVIA modelThe path to the dictionary files on S3: pfe-baiaes-eu-w1-project/mdm/config/PTE_DictionariesColumn nameDescriptionCOUNTRY_CODEISO2 code of the countryHEADER_NAMECode nameMDM_CODECode idCANONICAL_CODECanonical code of the identifierLONG_DESCRIPTIONCode description in EnglishPROFESSIONAL_TYPEWhether the specialty is a prescriber or not PTE_REPORT:The PTE_REPORT is the view from which the clients should get their data. It's a UNION of the reports for the IQVIA data model and the COMPANY data model. Calculation details may be found in the respective articles:IQVIA: PTE_SL IQVIA MODELCOMPANY: PTE_SL COMPANY MODEL"
},
{
"title": "Data Sourcing",
"pageID": "347664788",
"pageLink": "/display/GMDM/Data+Sourcing",
"content": "CountryIso CodeMDM RegionData ModelSnowflake ViewFranceFREMEACOMPANYPTE_REPORTArgentinaAEGBLIQVIAPTE_REPORTBrazilBRAMERCOMPANYPTE_REPORTMexicoMXGBLIQVIAPTE_REPORTChileCLGBLIQVIAPTE_REPORTColombiaCOGBLIQVIAPTE_REPORTSlovakaSKGBLIQVIAPTE_REPORTPhilippinesPKGBLIQVIAPTE_REPORTRéunionREEMEACOMPANYPTE_REPORTSaint Pierre and MiquelonPMEMEACOMPANYPTE_REPORTMayotteYTEMEACOMPANYPTE_REPORTFrench PolynesiaPFEMEACOMPANYPTE_REPORTFrench GuianaGFEMEACOMPANYPTE_REPORTWallis and FutunaWFEMEACOMPANYPTE_REPORTGuadeloupeGPEMEACOMPANYPTE_REPORTNew CaledoniaNCEMEACOMPANYPTE_REPORTMartiniqueMQEMEACOMPANYPTE_REPORTMauritiusMUEMEACOMPANYPTE_REPORTMonacoMCEMEACOMPANYPTE_REPORTAndorraADEMEACOMPANYPTE_REPORTTurkeyTREMEACOMPANYPTE_REPORT_TRSouth KoreaKRAPACCOMPANYPTE_REPORT_KRAll views are available in the global database in the PTE_SL schema."
},
{
"title": "PTE_SL IQVIA MODEL",
"pageID": "218432348",
"pageLink": "/display/GMDM/PTE_SL+IQVIA+MODEL",
"content": "Iqvia data model specification:name typedescription Reltio attribute URILOV Name additional querry conditions (IQIVIA model)additional querry conditions (COMPANY model)HCP_IDVARCHARReltio Entity URIi_hcp.entity_uri or i_affiliations.start_entity_urionly active hcp are returned (customer_sl.i_hcp.active ='TRUE')i_hcp.entity_uri or i_affiliations.start_entity_urionly active hcp are returnedHCO_IDVARCHARReltio Entity URIFor the IQIVIA model, all affiliation with i_affiliation.active = 'TRUE' and relation type in ('Activity','HasHealthCareRole') must be returned.i_hco.entity_uri select END_ENTITY_URI from customer_sl.i_affiliations where start_entity_uri ='T9u7Ej4'and active = 'TRUE'and relation_type in ('Activity','HasHealthCareRole') ;select * from customer_sl.p_affiliations where active=TRUE and relation_type = 'ContactAffiliations';WORKPLACE_NAMEVARCHARReltio workplace name or reltio workplace parent name.configuration/entityTypes/HCO/attributes/NameFor the IQIVIA model, all affiliation with i_affiliation.active = 'TRUE' and relation type in ('Activity','HasHealthCareRole') must be returned.i_hco.name must be returnedselect hco.name from customer_sl.i_affiliations a,customer_sl.i_hco hcowhere a.end_entity_uri = hco.entity_uri and a.start_entity_uri ='T9u7Ej4'and a.active = 'TRUE'and a.relation_type in ('Activity','HasHealthCareRole') ;For the COMPANY model, all affiliation with p_affiliation.active=TRUE and relation_type = 'ContactAffiliations'i_hco.nameSTATUSBOOLEANReltio Entity statusi_customer_sl.i_hcp.activemapping rule TRUE = ACTIVEi_customer_sl.p_hcp.activemapping rule TRUE = ACTIVELAST_MODIFICATION_DATETIMESAMP_LTZEntity update time in 
Snowflakeconfiguration/entityTypes/HCP/updateTimecustomer_sl.i_entity_update_dates.SF_UPDATE_TIMEi_customer_sl.p_entity_update.SF_UPDATE_TIMEFIRST_NAMEVARCHARconfiguration/entityTypes/HCP/attributes/FirstNamei_customer_sl.i_hcp.first_namei_customer_sl.p_hcp.first_nameLAST_NAMEVARCHARconfiguration/entityTypes/HCP/attributes/LastNamei_customer_sl.i_hcp.last_namei_customer_sl.p_hcp.last_nameTITLE_CODEVARCHARconfiguration/entityTypes/HCP/attributes/TitleLOV Name COMPANY = HCPTitleLOV Name IQVIA = LKUP_IMS_PROF_TITLEselect c.canonical_code from customer_sl.i_hcp hcp,customer_sl.i_code_translations c where hcp.title_lkp = c.code_id e.g. select c.canonical_code from customer_sl.i_hcp hcp,customer_sl.i_code_translations c where hcp.title_lkp = c.code_id and hcp.entity_uri='T9u7Ej4'and c.country='FR';select c.canonical_code from customer_sl.p_hcp hcp,customer_sl.p_codes c where hcp.title_lkp = c.code_idTITLE_DESCVARCHARconfiguration/entityTypes/HCP/attributes/TitleLOV Name COMPANY = HCPTitleLOV Name IQVIA = LKUP_IMS_PROF_TITLEselect c.lang_desc from customer_sl.i_hcp hcp,customer_sl.i_code_translations c where hcp.title_lkp = c.code_id e.g. select c.lang_desc from customer_sl.i_hcp hcp,customer_sl.i_code_translations c where hcp.title_lkp = c.code_id and hcp.entity_uri='T9u7Ej4'and c.country='FR';select c.desc from customer_sl.p_hcp hcp,customer_sl.p_codes c where hcp.title_lkp = c.code_idIS_PRESCRIBER'IS PRESCRIBER' calculation method for IQIVIA modelCASE When p_hcp.TYPE_CODE_LKP = 'HCPType:HCPT.PRES' then Y When p_hcp.TYPE_CODE_LKP = 'HCPType:HCPT.NPRS' then N ELSE To define COUNTRYCountry codeconfiguration/entityTypes/Location/attributes/countrycustomer_sl.i_hcp.countrycustomer_sl.p_hcp.countryPRIMARY_ADDRESS_LINE_1IQVIA: configuration/entityTypes/Location/attributes/AddressLine1COMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressLine1select address_line1 from customer_sl.i_address where address_rank=1select 
address_line1 from customer_sl.i_address where address_rank=1 and entity_uri='T9u7Ej4';select a.address_line1 from customer_sl.p_addresses a where a.address_rank =1PRIMARY_ADDRESS_LINE_2IQVIA: configuration/entityTypes/Location/attributes/AddressLine2COMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressLine2select address_line2 from customer_sl.i_address where address_rank=1select a.address_line2 from customer_sl.p_addresses a where a.address_rank =1PRIMARY_ADDRESS_CITYIQVIA: configuration/entityTypes/Location/attributes/CityCOMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/Cityselect city from customer_sl.i_address where address_rank=1select a.city from customer_sl.p_addresses a where a.address_rank =1PRIMARY_ADDRESS_POSTAL_CODEIQVIA: configuration/entityTypes/Location/attributes/Zip/attributes/ZIP5COMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/Zip5select ZIP5 from customer_sl.i_address where address_rank=1select a.ZIP5 from customer_sl.p_addresses a where a.address_rank =1PRIMARY_ADDRESS_STATEIQVIA: configuration/entityTypes/Location/attributes/StateProvinceCOMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/StateProvinceLOV Name COMPANY = Stateselect state_province from customer_sl.i_address where address_rank=1select c.desc from customer_sl.p_codes c,customer_sl.p_addresses a where a.address_rank=1 and a.STATE_PROVINCE_LKP = c.code_id PRIMARY_ADDR_STATUSIQVIA: configuration/entityTypes/Location/attributes/VerificationStatusCOMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/VerificationStatuscustomer_sl.i_address.verification_statuscustomer_sl.p_addresses.verification_statusPRIMARY_SPECIALTY_CODEconfiguration/entityTypes/HCO/attributes/Specialities/attributes/SpecialtyLOV Name COMPANY = HCPSpecialtyLOV Name IQVIA =LKUP_IMS_SPECIALTYe.g. select c.canonical_code from customer_sl.i_specialities s,customer_sl.i_code_translations c where s.specialty_lkp = 
c.code_id and s.entity_uri ='T9liLpi'and s.SPECIALTY_TYPE_LKP='LKUP_IMS_SPECIALTY_TYPE:SPEC' and c.lang_code = 'en'and c.country = 'FR';select c.canonical_code from customer_sl.p_specialities s,customer_sl.p_codes c where s.specialty_lkp = c.code_id and s.rank = 1 ;There are no extra query conditions connected with SPECIALTY_TYPE_LKP because in the GBL environment that parameter always has a NULL value. PRIMARY_SPECIALTY_DESCconfiguration/entityTypes/HCO/attributes/Specialities/attributes/SpecialtyLOV Name COMPANY = LKUP_IMS_SPECIALTYLOV Name IQVIA =LKUP_IMS_SPECIALTYe.g. select c.lang_desc from customer_sl.i_specialities s,customer_sl.i_code_translations c where s.specialty_lkp = c.code_id and s.entity_uri ='T9liLpi'and s.SPECIALTY_TYPE_LKP='LKUP_IMS_SPECIALTY_TYPE:SPEC' and c.lang_code = 'en'and c.country = 'FR';select c.desc from customer_sl.p_specialities s,customer_sl.p_codes c where s.specialty_lkp = c.code_id and s.rank = 1 ;There are no extra query conditions connected with SPECIALTY_TYPE_LKP because in the GBL environment that parameter always has a NULL value. GO_STATUSVARCHARconfiguration/entityTypes/HCP/attributes/Compliance/attributes/GOStatusgo_status <> ''CASE When i_hcp.go_status_lkp = 'LKUP_GOVOFF_GOSTATUS:GO' then Yes When i_hcp.go_status_lkp = 'LKUP_GOVOFF_GOSTATUS:NGO' then No ELSE NULLgo_status <> ''CASE When p_compliance.go_status_lkp = 'LKUP_GOVOFF_GOSTATUS:GO' then Y When p_compliance.go_status_lkp = 'LKUP_GOVOFF_GOSTATUS:NGO' then N ELSE Not defined (currently this is an empty table)IDENTIFIER1_CODEVARCHARReltio identifier code.configuration/entityTypes/HCP/attributes/Identifiers/attributes/Typeselect ct.canonical_code from customer_sl.i_code_translations ct,customer_sl.i_identifiers d where ct.code_id = d.TYPE_LKPThere is a need to set steering parameters that match the country code with proper code identifiers - according to driven_tabel2 described below. 
This is a place for the first one. e.g. select ct.canonical_code, ct.lang_desc, d.id, ct.*,d.* from customer_sl.i_code_translations ct,customer_sl.i_identifiers d where ct.code_id = d.TYPE_LKP and d.entity_uri='T9v0e54'and ct.lang_code='en'and ct.country ='FR';select ct.canonical_code from customer_sl.p_codes ct,customer_sl.p_identifiers d where ct.code_id = d.TYPE_LKPThere is a need to set steering parameters that match the country code with proper code identifiers - according to driven_tabel2 described below. This is a place for the first one.IDENTIFIER1_CODE_DESCVARCHARconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/Typeselect ct.lang_desc from customer_sl.i_code_translations ct,customer_sl.i_identifiers d where ct.code_id = d.TYPE_LKPselect ct.desc from customer_sl.p_codes ct,customer_sl.p_identifiers d where ct.code_id = d.TYPE_LKPIDENTIFIER1_VALUEVARCHARconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/IDselect id from customer_sl.i_identifiers select id from customer_sl.p_identifiersIDENTIFIER2_CODEVARCHARconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/Typeselect ct.canonical_code from customer_sl.i_code_translations ct,customer_sl.i_identifiers d where ct.code_id = d.TYPE_LKPA maximum of two identifiers can be returnedThere is a need to set steering parameters that match the country code with proper code identifiers - according to driven_tabel2 described below. This is a place for the second one.select ct.canonical_code from customer_sl.p_codes ct,customer_sl.p_identifiers d where ct.code_id = d.TYPE_LKPA maximum of two identifiers can be returnedThere is a need to set steering parameters that match the country code with proper code identifiers - according to driven_tabel2 described below. 
This is a place for the second one.IDENTIFIER2_CODE_DESCVARCHARconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/Typeselect ct.lang_desc from customer_sl.i_code_translations ct,customer_sl.i_identifiers d where ct.code_id = d.TYPE_LKPselect ct.desc from customer_sl.p_codes ct,customer_sl.p_identifiers d where ct.code_id = d.TYPE_LKPIDENTIFIER2_VALUEVARCHARconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/IDselect i.id from customer_sl.i_identifiers select id from customer_sl.p_identifiersDGSCATEGORYVARCHARIQVIA: configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSCategoryCOMPANY: configuration/entityTypes/HCP/attributes/DisclosureBenefitCategoryLKUP_BENEFITCATEGORY_HCP,LKUP_BENEFITCATEGORY_HCOselect ct.lang_desc from customer_sl.i_code_translations ct,customer_sl.i_disclosure d where ct.code_id = d.dgs_category_lkpselect DisclosureBenefitCategory from p_hcpDGSCATEGORY_CODEVARCHARconfiguration/entityTypes/HCP/attributes/Disclosure/attributes/DGSCategoryLKUP_BENEFITCATEGORY_HCP,LKUP_BENEFITCATEGORY_HCOselect ct.canonical_code from customer_sl.i_code_translations ct,customer_sl.i_disclosure d where ct.code_id = d.dgs_category_lkpcomment: select i_code.canonical_code for a value returned from DisclosureBenefitCategory DGSTITLEVARCHARIQVIA: configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSTitleCOMPANY: configuration/entityTypes/HCP/attributes/DisclosureBenefitTitleLKUP_BENEFITTITLEselect ct.lang_desc from customer_sl.i_code_translations ct,customer_sl.i_disclosure d where ct.code_id = d.DGS_TITLE_LKPselect DisclosureBenefitTitle from p_hcpDGSTITLE_CODEVARCHARconfiguration/entityTypes/HCP/attributes/Disclosure/attributes/DGSTitleLKUP_BENEFITTITLEselect ct.canonical_code from customer_sl.i_code_translations ct,customer_sl.i_disclosure d where ct.code_id = d.DGS_TITLE_LKPcomment: select i_code.canonical_code for a value returned from DisclosureBenefitTitle DGSQUALITYVARCHARIQVIA: 
configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSQualityCOMPANY: configuration/entityTypes/HCP/attributes/DisclosureBenefitQualityLKUP_BENEFITQUALITYselect ct.lang_desc from customer_sl.i_code_translations ct,customer_sl.i_disclosure d where ct.code_id = d.DGS_QUALITY_LKPselect DisclosureBenefitQuality from p_hcpDGSQUALITY_CODEVARCHARconfiguration/entityTypes/HCP/attributes/Disclosure/attributes/DGSQualityLKUP_BENEFITQUALITYselect ct.canonical_code from customer_sl.i_code_translations ct,customer_sl.i_disclosure d where ct.code_id = d.DGS_QUALITY_LKPcomment: select i_code.canonical_code for a value returned from DisclosureBenefitQuality DGSSPECIALTYVARCHARIQVIA: configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSSpecialtyCOMPANY: configuration/entityTypes/HCP/attributes/DisclosureBenefitSpecialtyLKUP_BENEFITSPECIALTYselect ct.lang_desc from customer_sl.i_code_translations ct,customer_sl.i_disclosure d where ct.code_id = d.DGS_SPECIALTY_LKPDisclosureBenefitSpecialtyDGSSPECIALTY_CODEVARCHARconfiguration/entityTypes/HCP/attributes/Disclosure/attributes/DGSSpecialtyLKUP_BENEFITSPECIALTYselect canonical_code from customer_sl.i_code_translations ct,customer_sl.i_disclosure d where ct.code_id = d.DGS_SPECIALTY_LKPcomment: select i_code.canonical_code for a value returned from DisclosureBenefitSpecialtySECONDARY_SPECIALTY_DESCVARCHARA query should return values like:select c.LANG_DESC from "COMM_GBL_MDM_DMART_PROD_DB"."CUSTOMER_SL"."I_SPECIALITIES" s,"COMM_GBL_MDM_DMART_PROD_DB"."CUSTOMER_SL"."I_CODE_TRANSLATIONS" c where s.SPECIALTY_LKP = c.CODE_ID and s.RANK=2 and s.SPECIALTY_TYPE_LKP='LKUP_IMS_SPECIALTY_TYPE:SPEC' and c.LANG_CODE ='en' ← lang code condition and c.country ='PH' ← country condition and s.ENTITY_URI ='ENTITY_URI'; ← entity uri conditionEMAILVARCHARA query should return values like:select EMAIL from "COMM_GBL_MDM_DMART_PROD_DB"."CUSTOMER_SL"."I_EMAIL" where rank= 1 and entity_uri ='ENTITY_URI'; ← entity uri conditionCAUTION: In case when 
multiple values are returned, the first one must be returned as the query result.PHONEVARCHARA query should return values like:select FORMATTED_NUMBER from "COMM_GBL_MDM_DMART_PROD_DB"."CUSTOMER_SL"."I_PHONE" where RANK=1 and entity_uri ='ENTITY_URI'; ← entity uri conditionCAUTION: In case when multiple values are returned, the first one must be returned as the query result."
},
{
"title": "'IS PRESCRIBER' calculation method for IQIVIA model",
"pageID": "218434836",
"pageLink": "/display/GMDM/%27IS+PRESCRIBER%27+calculation+method+for+IQIVIA+model",
"content": "Parameters contains in SF model:SF xml parameter name in calculation metode.g. value from SF modelcustomer_sl.i_hcp.type_code_lkp hcp.professional_type_cdi_hcp.type_code_lkp LKUP_IMS_HCP_CUST_TYPE:PRESselect c.canonical_code from customer_sl.i_hcp s,customer_sl.i_codes cwheres.SUB_TYPE_CODE_LKP = c.code_id hcp.professional_subtype_cdprof_subtype_codeWFR.TYP.Iselect c.canonical_code from customer_sl.i_specialities s,customer_sl.i_codes cwheres.specialty_lkp = c.code_id and s.rank=1 and s.SPECIALTY_TYPE_LKP='LKUP_IMS_SPECIALTY_TYPE:SPEC' and c.parents='SPEC'spec.specialty_codespec_codeWFR.SP.IEcustomer_sl.i_hcp.countryhcp.countryi_hcp.countryFRDictionaries parameters:profesion_type_subtype.csv as dict_subtypesprofesion_type_subtype_fr.csv as dict_subtypesprofessions_type_subtype.xlsxxmlvalue from file to calculate SF viewe.g. value to calculate SF viewmdm_codedict_subtypes.mdm_codecanonical_codeWAR.TYP.Aprofessional_typedict_subtypes.professional_typeprofessional_typeNon-Prescriber, Prescribercountry_codedict_subtypes.country_codecountry_codeFRprofesion_type_speciality.csv as dict_specialtiesprofesion_type_speciality_fr.csv as dict_specialtiesprofessions_type_subtype.xlsxxmlvalue from file to calculate SF viewe.g. value to calculate SF viewmdm_codedict_subtypes.mdm_codecanonical_codeWAC.SP.24professional_typedict_subtypes.professional_typeprofessional_typeNon-Prescriber, Prescribercountry_codedict_subtypes.country_codecountry_codeFRIn a new PTE_SL view the files mentions above are migrated to driven_tabel3. 
So in the method description there is an extra condition that matches on the profession subtype or the specialty.Method description:Query conditions:driven_tabel3.country_code = i_hcp.country and driven_tabel3.canonical_code = prof_subtype_code and driven_tabel3.header_name = 'LKUP_IMS_HCP_SUBTYPE'driven_tabel3.country_code = i_hcp.country and driven_tabel3.canonical_code = spec_code and driven_tabel3.header_name='LKUP_IMS_SPECIALTY'
CASE
  WHEN i_hcp.type_code_lkp = 'LKUP_IMS_HCP_CUST_TYPE:PRES' THEN 'Y'
  WHEN coalesce(prof_subtype_code, spec_code, '') = '' THEN 'N'
  WHEN coalesce(prof_subtype_code, '') <> '' THEN
    -- profession subtype check: driven_tabel3.header_name = 'LKUP_IMS_HCP_SUBTYPE'
    CASE
      WHEN coalesce(driven_tabel3.canonical_code, '') = '' THEN 'N@1'
      WHEN coalesce(driven_tabel3.canonical_code, '') <> '' THEN
        CASE
          WHEN driven_tabel3.professional_type = 'Prescriber' THEN 'Y'
          WHEN driven_tabel3.professional_type = 'Non-Prescriber' THEN 'N'
          ELSE 'N@2'
        END
    END
  WHEN coalesce(spec_code, '') <> '' THEN
    -- specialty check: driven_tabel3.header_name = 'LKUP_IMS_SPECIALTY'
    CASE
      WHEN coalesce(driven_tabel3.canonical_code, '') = '' THEN 'N@3'
      WHEN coalesce(driven_tabel3.canonical_code, '') <> '' THEN
        CASE
          WHEN driven_tabel3.professional_type = 'Prescriber' THEN 'Y'
          WHEN driven_tabel3.professional_type = 'Non-Prescriber' THEN 'N'
          ELSE 'N@4'
        END
    END
  ELSE 'N@99'
END AS IS_PRESCRIBER"
},
{
"title": "PTE_SL COMPANY MODEL",
"pageID": "234711638",
"pageLink": "/display/GMDM/PTE_SL+COMPANY+MODEL",
"content": "COMPANY data model specification:name typedescription Reltio attribute URILOV Name additional querry conditions (COMPANY model)HCP_IDVARCHARReltio Entity URIi_hcp.entity_uri or i_affiliations.start_entity_urionly active hcp are returned (customer_sl.i_hcp.active ='TRUE')HCO_IDVARCHARReltio Entity URISELECT HCO.ENTITY_URIFROM CUSTOMER_SL.P_HCP HCPINNER JOIN CUSTOMER_SL.P_AFFILIATIONS AF    ON HCP.ENTITY_URI= AF.START_ENTITY_URIINNER JOIN CUSTOMER_SL.P_HCO HCO    ON AF.END_ENTITY_URI = HCO.ENTITY_URIWHERE AF.relation_type = 'ContactAffiliations'AND AF.ACTIVE = 'TRUE';TO - DO An additional conditions that should be included:querry need to return only HCP-HCO pairs for witch "P_AFFIL_RELATION_TYPE.RELATIONSHIPDESCRIPTION_LKP" = 'HCPRelationshipDescription:CON' A Pair HCP plus HCO must be uniqe.WORKPLACE_NAMEVARCHARReltio workplace name or reltio workplace parent name.configuration/entityTypes/HCO/attributes/NameSELECT HCO.NAMEFROM CUSTOMER_SL.P_HCP HCPINNER JOIN CUSTOMER_SL.P_AFFILIATIONS AF    ON HCP.ENTITY_URI= AF.START_ENTITY_URIINNER JOIN CUSTOMER_SL.P_HCO HCO    ON AF.END_ENTITY_URI = HCO.ENTITY_URIWHERE AF.relation_type = 'ContactAffiliations'AND AF.ACTIVE = 'TRUE';A Pair HCP plus HCO must be uniqe.STATUSBOOLEANReltio Entity statusi_customer_sl.p_hcp.activemapping rule TRUE = ACTIVELAST_MODIFICATION_DATETIMESAMP_LTZEntity update time in SnowFlakeconfiguration/entityTypes/HCP/updateTimep_entity_update.SF_UPDATE_TIMEFIRST_NAMEVARCHARconfiguration/entityTypes/HCP/attributes/FirstNamei_customer_sl.p_hcp.first_nameLAST_NAMEVARCHARconfiguration/entityTypes/HCP/attributes/LastNamei_customer_sl.p_hcp.last_nameTITLE_CODEVARCHARconfiguration/entityTypes/HCP/attributes/TitleLOV Name COMPANY = HCPTitleLOV Name IQIVIA = LKUP_IMS_PROF_TITLEselect c.canonical_code from customer_sl.p_hcp hcp,customer_sl.p_codes cwhere hcp.title_lkp = c.code_idTITLE_DESCVARCHARconfiguration/entityTypes/HCP/attributes/TitleLOV Name COMPANY = THCPTitleLOV Name IQIVIA = 
LKUP_IMS_PROF_TITLEselect c.desc from customer_sl.p_hcp hcp,customer_sl.p_codes cwhere hcp.title_lkp = c.code_idIS_PRESCRIBERCASE WHEN p_hcp.TYPE_CODE_LKP = 'HCPType:HCPT.PRES' THEN 'Y' WHEN p_hcp.TYPE_CODE_LKP = 'HCPType:HCPT.NPRS' THEN 'N' ELSE to be defined END COUNTRYCountry codeconfiguration/entityTypes/Location/attributes/countrycustomer_sl.p_hcp.countryPRIMARY_ADDRESS_LINE_1IQIVIA: configuration/entityTypes/Location/attributes/AddressLine1COMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressLine1select a.address_line1 from customer_sl.p_addresses a where a.address_rank =1PRIMARY_ADDRESS_LINE_2IQIVIA: configuration/entityTypes/Location/attributes/AddressLine2COMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/AddressLine2select a.address_line2 from customer_sl.p_addresses a where a.address_rank =1PRIMARY_ADDRESS_CITYIQIVIA: configuration/entityTypes/Location/attributes/CityCOMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/Cityselect a.city from customer_sl.p_addresses a where a.address_rank =1PRIMARY_ADDRESS_POSTAL_CODEIQIVIA: configuration/entityTypes/Location/attributes/Zip/attributes/ZIP5COMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/Zip5select a.ZIP5 from customer_sl.p_addresses a where a.address_rank =1PRIMARY_ADDRESS_STATEIQIVIA: configuration/entityTypes/Location/attributes/StateProvinceCOMPANY: configuration/entityTypes/HCP/attributes/Addresses/attributes/StateProvinceLOV Name COMPANY = Stateselect c.desc fromcustomer_sl.p_codes c,customer_sl.p_addresses awhere a.address_rank=1anda.STATE_PROVINCE_LKP = c.code_id PRIMARY_ADDR_STATUSIQIVIA: configuration/entityTypes/Location/attributes/VerificationStatusCOMPANY: 
configuration/entityTypes/HCP/attributes/Addresses/attributes/VerificationStatuscustomer_sl.p_addresses.verification_statusPRIMARY_SPECIALTY_CODEconfiguration/entityTypes/HCO/attributes/Specialities/attributes/SpecialtyLOV Name COMPANY = HCPSpecialtyLOV Name IQIVIA =LKUP_IMS_SPECIALTYselect c.canonical_code from customer_sl.p_specialities s,customer_sl.p_codes cwhere s.specialty_lkp =c.code_idand s.rank = 1 ;There are no extra query conditions connected with SPECIALTY_TYPE_LKP because in the GBL environment that parameter always has a NULL value. PRIMARY_SPECIALTY_DESCconfiguration/entityTypes/HCO/attributes/Specialities/attributes/SpecialtyLOV Name COMPANY = LKUP_IMS_SPECIALTYLOV Name IQIVIA =LKUP_IMS_SPECIALTYselect c.desc from customer_sl.p_specialities s,customer_sl.p_codes cwhere s.specialty_lkp =c.code_idand s.rank = 1 ;There are no extra query conditions connected with SPECIALTY_TYPE_LKP because in the GBL environment that parameter always has a NULL value. GO_STATUSVARCHARconfiguration/entityTypes/HCP/attributes/Compliance/attributes/GOStatusgo_status <> ''CASE WHEN p_compliance.go_status_lkp = 'LKUP_GOVOFF_GOSTATUS:GO' THEN 'Y' WHEN p_compliance.go_status_lkp = 'LKUP_GOVOFF_GOSTATUS:NGO' THEN 'N' ELSE not defined END (currently this is an empty table)IDENTIFIER1_CODEVARCHARReltio identifier code.configuration/entityTypes/HCP/attributes/Identifiers/attributes/Typeselect ct.canonical_code from customer_sl.p_codes ct,customer_sl.p_identifiers dwherect.code_id = d.TYPE_LKPThere is a need to set steering parameters that match country code with proper code identifiers - according to driven_tabel2 described below. 
This is a place for the first one.IDENTIFIER1_CODE_DESCVARCHARconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/Typeselect ct.desc from customer_sl.p_codes ct,customer_sl.p_identifiers dwherect.code_id = d.TYPE_LKPIDENTIFIER1_VALUEVARCHARconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/IDselect id from customer_sl.p_identifiersIDENTIFIER2_CODEVARCHARconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/Typeselect ct.canonical_code from customer_sl.p_codes ct,customer_sl.p_identifiers dwherect.code_id = d.TYPE_LKPA maximum of two identifiers can be returnedThere is a need to set steering parameters that match country code with proper code identifiers - according to driven_tabel2 described below. This is a place for the second one.IDENTIFIER2_CODE_DESCVARCHARconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/Typeselect ct.desc from customer_sl.p_codes ct,customer_sl.p_identifiers dwherect.code_id = d.TYPE_LKPIDENTIFIER2_VALUEVARCHARconfiguration/entityTypes/HCP/attributes/Identifiers/attributes/IDselect id from customer_sl.p_identifiersDGSCATEGORYVARCHARIQIVIA: configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSCategoryCOMPANY: configuration/entityTypes/HCP/attributes/DisclosureBenefitCategoryLKUP_BENEFITCATEGORY_HCP,LKUP_BENEFITCATEGORY_HCOselect DisclosureBenefitCategory from p_hcpDGSCATEGORY_CODEVARCHARconfiguration/entityTypes/HCP/attributes/Disclosure/attributes/DGSCategoryLKUP_BENEFITCATEGORY_HCP,LKUP_BENEFITCATEGORY_HCOcomment: select i_code.canonical_code for a value returned from DisclosureBenefitCategory DGSTITLEVARCHARIQIVIA: configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSTitleCOMPANY: configuration/entityTypes/HCP/attributes/DisclosureBenefitTitleLKUP_BENEFITTITLEselect DisclosureBenefitTitle from p_hcpDGSTITLE_CODEVARCHARconfiguration/entityTypes/HCP/attributes/Disclosure/attributes/DGSTitleLKUP_BENEFITTITLEcomment: select i_code.canonical_code for a value returned from 
DisclosureBenefitTitle DGSQUALITYVARCHARIQIVIA: configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSQualityCOMPANY: configuration/entityTypes/HCP/attributes/DisclosureBenefitQualityLKUP_BENEFITQUALITYselect DisclosureBenefitQuality from p_hcpDGSQUALITY_CODEVARCHARconfiguration/entityTypes/HCP/attributes/Disclosure/attributes/DGSQualityLKUP_BENEFITQUALITYcomment: select i_code.canonical_code for a value returned from DisclosureBenefitQuality DGSSPECIALTYVARCHARIQIVIA: configuration/entityTypes/HCP/attributes/Disclosure/attributes/DGSSpecialtyCOMPANY: configuration/entityTypes/HCP/attributes/DisclosureBenefitSpecialtyLKUP_BENEFITSPECIALTYDisclosureBenefitSpecialtyDGSSPECIALTY_CODEVARCHARconfiguration/entityTypes/HCP/attributes/Disclosure/attributes/DGSSpecialtyLKUP_BENEFITSPECIALTYcomment: select i_code.canonical_code for a value returned from DisclosureBenefitSpecialtySECONDARY_SPECIALTY_DESCVARCHAREMAILVARCHARPHONEVARCHAR"
},
{
"title": "Global Data Mart",
"pageID": "196886082",
"pageLink": "/display/GMDM/Global+Data+Mart",
"content": "The section describes the structure of  MDM GLOBAL Data Mart in Snowflake. The GLOBAL Data Mart contains consolidated data from multiple regional data marts.Databases:The Global MDM Data mart connects all markets using Snowflake DB Replication (if in the different zone) or Local DB (if in the same zone)<ENV>: DEV/QA/STG/PRODMDM_REGIONMDM Region detailsSnowflake  InstanceSnowflake DB nameTypeModelEMEAlinkhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comhttps://emeaprod01.eu-west-1.privatelink.snowflakecomputing.comCOMM_EMEA_MDM_DMART_<ENV>_DBlocalP / P_HIAMERlinkhttps://amerdev01.us-east-1.privatelink.snowflakecomputing.comhttps://amerprod01.us-east-1.privatelink.snowflakecomputing.comCOMM_AMER_MDM_DMART_<ENV>_DBreplicaP / P_HIUSlinkhttps://amerdev01.us-east-1.privatelink.snowflakecomputing.comhttps://amerprod01.us-east-1.privatelink.snowflakecomputing.comCOMM_GBL_MDM_DMART_<ENV>replicaP / P_HIAPAClinkhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comhttps://emeaprod01.eu-west-1.privatelink.snowflakecomputing.comCOMM_APAC_MDM_DMART_<ENV>_DBlocalP / P_HIEUlinkhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comhttps://emeaprod01.eu-west-1.privatelink.snowflakecomputing.comCOMM_EU_MDM_DMART_<ENV>_DBlocalIConsolidated GLOBAL Schema:The COMM_GBL_MDM_DMART_<ENV>_DB database includes the following schema:CUSTOMER - main schema containing consolidated views for all COMPANY models.CUSTOMER_SL - access schema for users containing a set of views accessing CUSTOMER schema objectsP_ - COMPANY Reltio Model and are prefixed with 'P'P_HI - COMPANY Reltio Model with Historical Inactive onekey crosswalksI_  - Ex-US data are in the IQIVIA Reltio model and are prefixed with 'I'AES_RS_SL - schema containing views that mimic Redshift data mart.User accessing the CUSTOMER_SL schema can query across all markets, having in mind the following details:P_ prefixed viewsP_HI prefixed viewsI_ prefixed viewsConsolidated view from all markets that 
are from "P" Model.The first column in each view is the MDM_REGION representing the information about the connection of the specific row to the market. Each market may contain a different number of columns and also some columns that exist in one market may not be available in the other. The Consolidated views aggregate all columns from all markets.Corresponding data model: Dynamic views for COMPANY MDM ModelConsolidated view from all markets that are from "P_HI" Model.The first column in each view is the MDM_REGION representing the information about the connection of the specific row to the market. Each market may contain a different number of columns and also some columns that exist in one market may not be available in the other. The Consolidated views aggregate all columns from all markets.View build based on the Legacy IQVIA Reltio Model, from EU market that is using "I" Model"Corresponding data model: Dynamic views for IQIVIA MDM ModelGLOBALInstance detailsENVSnowflake InstanceSnowflake DB NameReltio TenantRefresh timeDEVhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comCOMM_GBL_MDM_DMART_DEV_DBEMEA + AMER + US+ APAC + EUonce per dayQAhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comCOMM_GBL_MDM_DMART_QA_DBEMEA + AMER + US+ APAC + EUonce per daySTGhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comCOMM_GBL_MDM_DMART_STG_DBEMEA + AMER + US+ APAC + EUonce per dayPRODhttps://emeaprod01.eu-west-1.privatelink.snowflakecomputing.comCOMM_GBL_MDM_DMART_PROD_DBEMEA + AMER + US+ APAC + EUevery 2hRolesNPROD<ENV> = DEV/QA/STGRole NameLandingCustomerCustomer SLAES RS SLAccount MappingMetricsSandboxPTE_SLWarehouseAD Group 
NameCOMM_GBL_MDM_DMART_<ENV>_DEVOPS_ROLEFullFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)COMM_MDM_DMART_M_WH(M)COMM_MDM_DMART_L_WH(L)sfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_<ENV>_DEVOPS_ROLECOMM_GBL_MDM_DMART_<ENV>_MTCH_AFFIL_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_<ENV>_MTCH_AFFIL_ROLECOMM_GBL_MDM_DMART_<ENV>_METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_<ENV>_METRIC_ROLECOMM_GBL_MDM_DMART_<ENV>_MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_<ENV>_MDM_ROLECOMM_GBL_MDM_DMART_<ENV>_READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_<ENV>_READ_ROLECOMM_GBL_MDM_DMART_<ENV>_DATA_ROLERead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_<ENV>_DATA_ROLECOMM_GBL_MDM_DMART_<ENV>_PTE_ROLERead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_GBL_MDM_DMART_<ENV>_PTE_ROLEPRODRole NameLandingCustomerCustomer SLAES RS SLAccount MappingMetricsSandboxPTE_SLWarehouseAD Group 
NameCOMM_GBL_MDM_DMART_PROD_DEVOPS_ROLEFullFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)COMM_MDM_DMART_M_WH(M)COMM_MDM_DMART_L_WH(L)sfdb_eu-west-1_emeaprod01_COMM_GBL_MDM_DMART_PROD_DEVOPS_ROLECOMM_GBL_MDM_DMART_PROD_MTCH_AFFIL_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_GBL_MDM_DMART_PRD_MTCHAFFIL_ROLECOMM_GBL_MDM_DMART_PROD_METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_GBL_MDM_DMART_PROD_METRIC_ROLECOMM_GBL_MDM_DMART_PROD_MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_GBL_MDM_DMART_PROD_MDM_ROLECOMM_GBL_MDM_DMART_PROD_READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_GBL_MDM_DMART_PROD_READ_ROLECOMM_GBL_MDM_DMART_PROD_DATA_ROLERead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_GBL_MDM_DMART_PROD_DATA_ROLECOMM_GBL_MDM_DMART_PROD_PTE_ROLERead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_GBL_MDM_DMART_PROD_PTE_ROLE"
},
{
"title": "Global Data Materialization Process",
"pageID": "356800042",
"pageLink": "/display/GMDM/Global+Data+Materialization+Process",
"content": ""
},
{
"title": "Regional Data Marts",
"pageID": "196886987",
"pageLink": "/display/GMDM/Regional+Data+Marts",
"content": "The regional data mart is presenting MDM data from one region.  Data are loaded from one selected Reltio instance. They are being refreshed more frequently than the global mart. They are a good choice for clients operating in local markets.EMEAInstance detailsENVSnowflake InstanceSnowflake DB NameReltio TenantRefresh timeDEVhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comCOMM_EMEA_MDM_DMART_DEV_DBwn60kG248ziQSMWevery day between 2 am - 4 am ESTQAhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comCOMM_EMEA_MDM_DMART_QA_DBvke5zyYwTifyeJSevery day between 2 am - 4 am ESTSTGhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comCOMM_EMEA_MDM_DMART_STG_DBDzueqzlld107BVWevery day between 2 am - 4 am EST *Due to many projects running on the environment the refresh time has been temporarily changed to "every 2 hours" for the client's convenience.PRODhttps://emeaprod01.eu-west-1.privatelink.snowflakecomputing.com/COMM_EMEA_MDM_DMART_PROD_DBXy67R0nDA10RUV6every 2 hoursRolesNPROD<ENV> = DEV/QA/STGRole NameLandingCustomerCustomer SLAES RS SLAccount MappingMetricsSandboxWarehouseAD Group 
NameCOMM_EMEA_MDM_DMART_<ENV>_DEVOPS_ROLEFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)COMM_MDM_DMART_M_WH(M)COMM_MDM_DMART_L_WH(L)sfdb_eu-west-1_emeadev01_COMM_EMEA_MDM_DMART_<ENV>_DEVOPS_ROLECOMM_EMEA_MDM_DMART_<ENV>_MTCH_AFFIL_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_EMEA_MDM_DMART_<ENV>_MTCH_AFFIL_ROLECOMM_EMEA_MDM_DMART_<ENV>_METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_EMEA_MDM_DMART_<ENV>_METRIC_ROLECOMM_EMEA_MDM_DMART_<ENV>_MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_EMEA_MDM_DMART_<ENV>_MDM_ROLECOMM_EMEA_MDM_DMART_<ENV>_READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_EMEA_MDM_DMART_<ENV>_READ_ROLECOMM_EMEA_MDM_DMART_<ENV>_DATA_ROLERead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_EMEA_MDM_DMART_<ENV>_DATA_ROLEPRODRole NameLandingCustomerCustomer SLAES RS SLAccount MappingMetricsSandboxWarehouseAD Group 
NameCOMM_EMEA_MDM_DMART_PROD_DEVOPS_ROLEFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)COMM_MDM_DMART_M_WH(M)COMM_MDM_DMART_L_WH(L)sfdb_eu-west-1_emeaprod01_COMM_EMEA_MDM_DMART_PROD_DEVOPS_ROLECOMM_EMEA_MDM_DMART_PROD_MTCH_AFFIL_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_EMEA_MDM_DMART_PRD_MTCHAFFIL_ROLECOMM_EMEA_MDM_DMART_PROD_METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_EMEA_MDM_DMART_PROD_METRIC_ROLECOMM_EMEA_MDM_DMART_PROD_MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_EMEA_MDM_DMART_PROD_MDM_ROLECOMM_EMEA_MDM_DMART_PROD_READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_EMEA_MDM_DMART_PROD_READ_ROLECOMM_EMEA_MDM_DMART_PROD_DATA_ROLERead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_EMEA_MDM_DMART_PROD_DATA_ROLEAMERInstance detailsENVSnowflake InstanceSnowflake DB NameReltio TenantRefresh timeDEVhttps://amerdev01.us-east-1.privatelink.snowflakecomputing.com/COMM_AMER_MDM_DMART_DEV_DBwJmSQ8GWI8Q6Fl1every day between 2 am - 4 am ESTQAhttps://amerdev01.us-east-1.privatelink.snowflakecomputing.com/COMM_AMER_MDM_DMART_QA_DB805QOf1Xnm96SPjevery day between 2 am - 4 am ESTSTGhttps://amerdev01.us-east-1.privatelink.snowflakecomputing.com/COMM_AMER_MDM_DMART_STG_DBK7I3W3xjg98Dy30every day between 2 am - 4 am ESTPRODhttps://amerprod01.us-east-1.privatelink.snowflakecomputing.comCOMM_AMER_MDM_DMART_PROD_DBYs7joaPjhr9DwBJevery 2 hoursRolesNPROD<ENV> = DEV/QA/STGRole NameLandingCustomerCustomer SLAES RS SLAccount MappingMetricsSandboxWarehouseAD Group 
NameCOMM_AMER_MDM_DMART_<ENV>_DEVOPS_ROLEFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)COMM_MDM_DMART_M_WH(M)COMM_MDM_DMART_L_WH(L)sfdb_us-east-1_amerdev01_COMM_AMER_MDM_DMART_<ENV>_DEVOPS_ROLECOMM_AMER_MDM_DMART_<ENV>_MTCH_AFFIL_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerdev01_COMM_AMER_MDM_DMART_<ENV>_MTCH_AFFIL_ROLECOMM_AMER_MDM_DMART_<ENV>_METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerdev01_COMM_AMER_MDM_DMART_<ENV>_METRIC_ROLECOMM_AMER_MDM_DMART_<ENV>_MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerdev01_COMM_AMER_MDM_DMART_<ENV>_MDM_ROLECOMM_AMER_MDM_DMART_<ENV>_READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerdev01_COMM_AMER_MDM_DMART_<ENV>_READ_ROLECOMM_AMER_MDM_DMART_<ENV>_DATA_ROLERead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerdev01_COMM_AMER_MDM_DMART_<ENV>_DATA_ROLEPRODRole NameLandingCustomerCustomer SLAES RS SLAccount MappingMetricsSandboxWarehouseAD Group 
NameCOMM_AMER_MDM_DMART_PROD_DEVOPS_ROLEFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)COMM_MDM_DMART_M_WH(M)COMM_MDM_DMART_L_WH(L)sfdb_us-east-1_amerprod01_COMM_AMER_MDM_DMART_PROD_DEVOPS_ROLECOMM_AMER_MDM_DMART_PROD_MTCH_AFFIL_RORead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerprod01_COMM_AMER_MDM_DMART_PROD_MTCH_AFFIL_ROCOMM_AMER_MDM_DMART_PROD_METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerprod01_COMM_AMER_MDM_DMART_PROD_METRIC_ROLECOMM_AMER_MDM_DMART_PROD_MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerprod01_COMM_AMER_MDM_DMART_PROD_MDM_ROLECOMM_AMER_MDM_DMART_PROD_READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerprod01_COMM_AMER_MDM_DMART_PROD_READ_ROLECOMM_AMER_MDM_DMART_PROD_DATA_ROLERead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerprod01_COMM_AMER_MDM_DMART_PROD_DATA_ROLEUSInstance detailsENVSnowflake InstanceSnowflake DB NameReltio TenantRefresh timeDEVhttps://amerdev01.us-east-1.privatelink.snowflakecomputing.comCOMM_GBL_MDM_DMART_DEVsw8BkTZqjzGr7hnevery day between 2 am - 4 am ESTQAhttps://amerdev01.us-east-1.privatelink.snowflakecomputing.comCOMM_GBL_MDM_DMART_QArEAXRHas2ovllvTevery day between 2 am - 4 am ESTSTGhttps://amerdev01.us-east-1.privatelink.snowflakecomputing.comCOMM_GBL_MDM_DMART_STG48ElTIteZz05XwTevery day between 2 am - 4 am ESTPRODhttps://amerprod01.us-east-1.privatelink.snowflakecomputing.comCOMM_GBL_MDM_DMART_PROD9kL30u7lFoDHp6Xevery 2 hoursRolesNPROD<ENV> = DEV/QA/STGRole NameLandingCustomerCustomer SLAES RS SLAccount MappingMetricsSandboxWarehouseAD Group 
NameCOMM_<ENV>_MDM_DMART_DEVOPS_ROLEFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)COMM_MDM_DMART_M_WH(M)COMM_MDM_DMART_L_WH(L)sfdb_us-east-1_amerdev01_COMM_<ENV>_MDM_DMART_DEVOPS_ROLECOMM_MDM_DMART_<ENV>_MTCH_AFFIL_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerdev01_COMM_<ENV>_MDM_DMART_MTCH_AFFIL_ROLECOMM_<ENV>_MDM_DMART_ANALYSIS_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-Onlysfdb_us-east-1_amerdev01_COMM_<ENV>_MDM_DMART_ANALYSIS_ROLECOMM_<ENV>_MDM_DMART_METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerdev01_COMM_<ENV>_MDM_DMART_METRIC_ROLECOMM_MDM_DMART_<ENV>_MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerdev01_COMM_<ENV>_MDM_DMART_MDM_ROLECOMM_<ENV>_MDM_DMART_READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerdev01_COMM_<ENV>_MDM_DMART_READ_ROLECOMM_MDM_DMART_<ENV>_DATA_ROLERead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerdev01_COMM_<ENV>_MDM_DMART_DATA_ROLEPRODRole NameLandingCustomerCustomer SLAES RS SLAccount MappingMetricsSandboxWarehouseAD Group 
NameCOMM_PROD_MDM_DMART_DEVOPS_ROLEFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)COMM_MDM_DMART_M_WH(M)COMM_MDM_DMART_L_WH(L)sfdb_us-east-1_amerprod01_COMM_PROD_MDM_DMART_DEVOPS_ROLECOMM_MDM_DMART_PROD_MTCH_AFFIL_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerprod01_COMM_PROD_MDM_DMART_MTCH_AFFIL_ROLECOMM_PROD_MDM_DMART_ANALYSIS_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerprod01_COMM_PROD_MDM_DMART_ANALYSIS_ROLECOMM_PROD_MDM_DMART_METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerprod01_COMM_PROD_MDM_DMART_METRIC_ROLECOMM_MDM_DMART_PROD_MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerprod01_COMM_PROD_MDM_DMART_MDM_ROLECOMM_PROD_MDM_DMART_READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerprod01_COMM_PROD_MDM_DMART_READ_ROLECOMM_MDM_DMART_PROD_DATA_ROLERead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_us-east-1_amerprod01_COMM_PROD_MDM_DMART_DATA_ROLEAPACInstance detailsENVSnowflake InstanceSnowflake DB NameReltio TenantRefresh timeDEVhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comCOMM_APAC_MDM_DMART_DEV_DBw2NBAwv1z2AvlkgSevery day between 2 am - 4 am ESTQAhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comCOMM_APAC_MDM_DMART_QA_DBxs4oRCXpCKewNDKevery day between 2 am - 4 am ESTSTGhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comCOMM_APAC_MDM_DMART_STG_DBY4StMNK3b0AGDf6every day between 2 am - 4 am ESTPRODhttps://emeaprod01.eu-west-1.privatelink.snowflakecomputing.com/COMM_APAC_MDM_DMART_PROD_DBsew6PfkTtSZhLdWevery 2 hoursRolesNPROD<ENV> = DEV/QA/STGRole NameLandingCustomerCustomer SLAES RS SLAccount MappingMetricsSandboxWarehouseAD Group 
NameCOMM_APAC_MDM_DMART_<ENV>_DEVOPS_ROLEFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)COMM_MDM_DMART_M_WH(M)COMM_MDM_DMART_L_WH(L)sfdb_eu-west-1_emeadev01_COMM_APAC_MDM_DMART_<ENV>_DEVOPS_ROLECOMM_APAC_MDM_DMART_<ENV>_MTCH_AFFIL_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_APAC_MDM_DMART_<ENV>_MTCH_AFFIL_ROLECOMM_APAC_MDM_DMART_<ENV>_METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_APAC_MDM_DMART_<ENV>_METRIC_ROLECOMM_APAC_MDM_DMART_<ENV>_MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_APAC_MDM_DMART_<ENV>_MDM_ROLECOMM_APAC_MDM_DMART_<ENV>_READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_APAC_MDM_DMART_<ENV>_READ_ROLECOMM_APAC_MDM_DMART_<ENV>_DATA_ROLERead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_APAC_MDM_DMART_<ENV>_DATA_ROLEPRODRole NameLandingCustomerCustomer SLAES RS SLAccount MappingMetricsSandboxWarehouseAD Group 
NameCOMM_APAC_MDM_DMART_PROD_DEVOPS_ROLEFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)COMM_MDM_DMART_M_WH(M)COMM_MDM_DMART_L_WH(L)sfdb_eu-west-1_emeaprod01_COMM_APAC_MDM_DMART_PROD_DEVOPS_ROLECOMM_APAC_MDM_DMART_PROD_MTCH_AFFIL_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_APAC_MDM_DMART_PRD_MTCHAFFIL_ROLECOMM_APAC_MDM_DMART_PROD_METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_APAC_MDM_DMART_PROD_METRIC_ROLECOMM_APAC_MDM_DMART_PROD_MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_APAC_MDM_DMART_PROD_MDM_ROLECOMM_APAC_MDM_DMART_PROD_READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_APAC_MDM_DMART_PROD_READ_ROLECOMM_APAC_MDM_DMART_PROD_DATA_ROLERead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_APAC_MDM_DMART_PROD_DATA_ROLEEU (ex-us)Instance detailsENVSnowflake InstanceSnowflake DB NameReltio TenantRefresh timeDEVhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comCOMM_EU_MDM_DMART_DEV_DBFLy4mo0XAh0YEbNevery day between 2 am - 4 am ESTQAhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comCOMM_EU_MDM_DMART_QA_DBAwFwKWinxbarC0Zevery day between 2 am - 4 am ESTSTGhttps://emeadev01.eu-west-1.privatelink.snowflakecomputing.comCOMM_EU_MDM_DMART_STG_DBFW4YTaNQTJEcN2gevery day between 2 am - 4 am ESTPRODhttps://emeaprod01.eu-west-1.privatelink.snowflakecomputing.com/COMM_EU_MDM_DMART_PROD_DBFW2ZTF8K3JpdfFlevery 2 hoursRolesNPROD<ENV> = DEV/QA/STGRole NameLandingCustomerCustomer SLAES RS SLAccount MappingMetricsSandboxWarehouseAD Group 
NameCOMM_<ENV>_MDM_DMART_OPS_ROLEDEVFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)COMM_MDM_DMART_M_WH(M)COMM_MDM_DMART_L_WH(L)sfdb_eu-west-1_emeadev01_COMM_<ENV>_MDM_DMART_DEVOPS_ROLECOMM_MDM_DMART_<ENV>_MTCH_AFFIL_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_<ENV>_MDM_DMART_MTCH_AFFIL_ROLECOMM_EU_<ENV>_MDM_DMART_METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_EU_<ENV>_MDM_DMART_METRIC_ROLECOMM_MDM_DMART_<ENV>_MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_<ENV>_MDM_DMART_MDM_ROLECOMM_EU_MDM_DMART_<ENV>_READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_<ENV>_MDM_DMART_READ_ROLECOMM_MDM_DMART_<ENV>_DATA_ROLERead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeadev01_COMM_<ENV>_MDM_DMART_DATA_ROLEPRODRole NameLandingCustomerCustomer SLAES RS SLAccount MappingMetricsSandboxWarehouseAD Group 
NameCOMM_PROD_MDM_DMART_DEVOPS_ROLEFullFullFullFullFullFullFullCOMM_MDM_DMART_WH(S)COMM_MDM_DMART_M_WH(M)COMM_MDM_DMART_L_WH(L)sfdb_eu-west-1_emeaprod01_COMM_PROD_MDM_DMART_DEVOPS_ROLECOMM_MDM_DMART_PROD_MTCH_AFFIL_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyFullRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_PROD_MDM_DMART_MTCH_AFFIL_ROLECOMM_EU_MDM_DMART_PROD_METRIC_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_EU_PROD_MDM_DMART_METRIC_ROLECOMM_MDM_DMART_PROD_MDM_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyFullCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_PROD_MDM_DMART_MDM_ROLECOMM_PROD_MDM_DMART_READ_ROLERead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_PROD_MDM_DMART_READ_ROLECOMM_MDM_DMART_PROD_DATA_ROLERead-OnlyRead-OnlyCOMM_MDM_DMART_WH(S)sfdb_eu-west-1_emeaprod01_COMM_PROD_MDM_DMART_DATA_ROLE"
},
{
"title": "MDM Admin Management API",
"pageID": "294663752",
"pageLink": "/display/GMDM/MDM+Admin+Management+API",
"content": ""
},
{
"title": "Description",
"pageID": "294663759",
"pageLink": "/display/GMDM/Description",
"content": "MDM Admin is a management API, automating numerous repeatable tasks and enabling the end user to perform them, without the need to make a request and wait for one of MDM Hub's engineers to pick it up.At its current state, MDM Hub provides below services:Modify Kafka offsetGenerate outbound eventsReconcile an entity/relation (only used by MDM Hub Ops Team)Each functionality is described in detail in the following chapters.API URL listTenantEnvironmentMDM Admin API Base URLSwagger URL - API DocumentationGBL (EX-US)DEVhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gbl-dev/https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-gbl-dev/swagger-ui/index.html QAhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gbl-qa/https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-gbl-qa/swagger-ui/index.html STAGEhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gbl-stage/https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-gbl-stage/swagger-ui/index.html PRODhttps://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-admin-gbl-prod/https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-admin-spec-gbl-prod/swagger-ui/index.html GBLUSDEVhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gblus-dev/https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-gblus-dev/swagger-ui/index.html QAhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gblus-qa/https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-gblus-qa/swagger-ui/index.html STAGEhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gblus-stage/https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-gblus-stage/swagger-ui/index.html PRODhttps://api-amer-prod-gbl-mdm-hub.COMPANY.com/api-admin-gblus-prod/https://api-amer-prod-gbl-mdm-hub.COMPANY.com/api-admin-spec-gblus-prod/swagger-ui/index.html 
EMEADEVhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev/https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html QAhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-qa/https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-qa/swagger-ui/index.html STAGEhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-stage/https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-stage/swagger-ui/index.html PRODhttps://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-admin-emea-prod/https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-prod/swagger-ui/index.html AMERDEVhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-amer-dev/https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-amer-dev/swagger-ui/index.html QAhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-amer-qa/https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-amer-qa/swagger-ui/index.html STAGEhttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-amer-stage/https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-amer-stage/swagger-ui/index.html PRODhttps://api-amer-prod-gbl-mdm-hub.COMPANY.com/api-admin-amer-prod/https://api-amer-prod-gbl-mdm-hub.COMPANY.com/api-admin-spec-amer-prod/swagger-ui/index.html APACDEVhttps://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-admin-apac-dev/https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-apac-dev/swagger-ui/index.html QAhttps://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-admin-apac-qa/https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-apac-qa/swagger-ui/index.html STAGEhttps://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-admin-apac-stage/https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-apac-stage/swagger-ui/index.html PRODhttps://api-apac-prod-gbl-mdm-hub.COMPANY.com/api-admin-apac-prod/https://api-apac-prod-gbl-mdm-hub.COMPANY.com/api-admin-spec-apac-prod/swagger-ui/index.html Modify Kafka offsetIf 
you are consuming from MDM Hub's outbound topic, you can now modify the offsets to skip/re-send messages. Please refer to the Swagger Documentation for additional details.Example 1Environment is EMEA DEV. The user wants to consume the last 100 messages from his topic again. He is using topic "emea-dev-out-full-test-topic-1" and consumer group "emea-dev-consumergroup-1".Steps:Disable the consumer. Kafka will not allow offset manipulation if the topic/consumer group is in use.Send the below request:\nPOST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev/kafka/offset\n{\n "topic": "emea-dev-out-full-test-topic-1", \n "groupId": "emea-dev-consumergroup-1",\n "shiftBy": -100\n}\nEnable the consumer. The last 100 events will be re-consumed.Example 2The user wants to consume all available messages from the topic again.Steps:Disable the consumer. Kafka will not allow offset manipulation if the topic/consumer group is in use.Send the below request:\nPOST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev/kafka/offset\n{\n "topic": "emea-dev-out-full-test-topic-1", \n "groupId": "emea-dev-consumergroup-1",\n "offset": "earliest"\n}\nEnable the consumer. All events from the topic will be available for consumption again.Resend EventsAllows re-sending events to MDM Hub's outbound Kafka topics, with filtering by Entity Type (entity or relation), modification date, country and source. Please refer to the Swagger Documentation for more details. An example use scenario is described below.Generated events are filtered by the topic routing rule (by country, event type etc.). Generating events for a country may not result in anything being produced on the topic if that country is not added to the filter.Before starting a Resend Events job, please make sure that the country is already added to the routing rule. 
Otherwise, request that the additional country be added (TODO: link to the instruction).ExampleFor development purposes, the user needs to generate 10k events to his "emea-dev-out-full-test-topic-1" topic for the new market - Belgium (BE).Steps:Send the below request:\nPOST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev/events/resend\n{\n "countries": [\n "be"\n ],\n "objectType": "ENTITY",\n "limit": 10000,\n "reconciliationTarget": "emea-dev-out-full-test-topic-1"\n}\nA process will start on MDM Hub's side, generating events on this topic. The response to the request will contain the process ID (dag_run_id):\n{\n "dag_id": "reconciliation_system_amer_dev",\n "dag_run_id": "manual__2022-11-30T14:12:07.780320+00:00",\n "execution_date": "2022-11-30T14:12:07.780320+00:00",\n "state": "queued"\n}\nYou can check the status of this process by sending the below request:\nGET https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev/events/resend/status/manual__2022-11-30T14:12:07.780320+00:00\nResponse:\n{\n "dag_id": "reconciliation_system_amer_dev",\n "dag_run_id": "manual__2022-11-30T14:12:07.780320+00:00",\n "execution_date": "2022-11-30T14:12:07.780320+00:00",\n "state": "started"\n}\nOnce the process is completed, all the requested events will have been sent to the topic."
},
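The two Kafka-offset examples above differ only in whether they pass shiftBy (relative move) or offset ("earliest"). A minimal sketch of a payload builder for the /kafka/offset endpoint; the field names come from the examples above, while the mutual-exclusivity check is an assumption of this sketch:

```python
def offset_request(topic, group_id, *, shift_by=None, offset=None):
    """Build the JSON body for POST .../kafka/offset.

    Assumption: the endpoint expects exactly one of shiftBy (relative
    move, negative rewinds) or offset ("earliest" re-reads the topic).
    """
    if (shift_by is None) == (offset is None):
        raise ValueError("provide exactly one of shift_by or offset")
    body = {"topic": topic, "groupId": group_id}
    if shift_by is not None:
        body["shiftBy"] = shift_by
    else:
        body["offset"] = offset
    return body
```

Remember to disable the consumer first; Kafka rejects offset changes for a consumer group that is in use.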
{
"title": "Requesting Access",
"pageID": "294663762",
"pageLink": "/display/GMDM/Requesting+Access",
"content": "Access to MDM Admin Management API should be requested via email sent to MDM Hub's DL: DL-ATP_MDMHUB_SUPPORT@COMPANY.com.Below chapters contain required details and email templates.Modify Kafka OffsetRequired details:Team name (including Person of Contact)List of topicsList of consumergroupsUsername (already used for Kafka, API etc.)Email template:\nHi Team,\n\nPlease provide us with access to the MDM Admin API. Details below:\n\nAPI: Kafka Offset\nTeam name: MDM Hub\nTopics:\n - emea-dev-out-full-test-topic\n - emea-qa-out-full-test-topic \n - emea-stage-out-full-test-topic \nConsumergroups: \n - emea-dev-hub \n - emea-qa-hub \n - emea-stage-hub \nUsername: mdm-hub-user\n\nBest Regards,\nPiotr\nResend EventsRequired details:Team name (including Person of Contact)List of topicsUsername (already used for Kafka, API etc.)Email template:\nHi Team,\n\nPlease provide us with access to the MDM Admin API. Details below:\n\nAPI: Resend Events\nTeam name: MDM Hub\nTopics: \n - emea-dev-out-full-test-topic\nUsername: mdm-hub-user\n\nBest Regards,\nPiotr\n"
},
{
"title": "Flows",
"pageID": "164470069",
"pageLink": "/display/GMDM/Flows",
"content": ""
},
{
"title": "Batch clear ETL data load cache",
"pageID": "333154693",
"pageLink": "/display/GMDM/Batch+clear+ETL+data+load+cache",
"content": "DescriptionThis is the batch operation to clear the batch cache. The process was designed to clear the Mongo cache (it removes records from batchEntityProcessStatus) for a specified batch name, sourceId type and value. This process is an adapter to the /batchController/{batchName}/_clearCache operation exposed by the mdmhub batch service that allows the user to clear the cache.Link to the clear-batch-cache-by-crosswalk documentation exposed by Batch Service: Clear Cache by croswalksLink to HUB UI documentation: HUB UI User Guide Flow: The client delivers a file including the list of source types and values to be cleared by HUB. The file is uploaded to the S3 resource by the MDM HUB UI.The clear batch process is triggered by the MDM HUB Admin service.The process parses the input files and calls the Batch Service API to clear the cache.File load through UI details:MAX SizeMax file size is 128MBHow to prepare the file to avoid unexpected errors:File format descriptionFile needs to be encoded with UTF-8 without BOM.Input fileFile format: CSV Encoding: UTF-8EOL: UnixHow to set this up using Notepad++:Set encoding:Set EOL to Unix:Check (bottom right corner):Column headers:SourceType - source crosswalk type that describes the entitySourceValue - source crosswalk value that describes the entityInput file example (clear_cache_ex.csv):\nSourceType;SourceValue\nReltio;upIP01W\nSAP;3000201428\nInternalsAirflow process name: clear_batch_service_cache_{{ env }}"
},
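The encoding requirements above (UTF-8 without BOM, Unix EOL, semicolon delimiter) can also be satisfied programmatically rather than via Notepad++. A sketch; the function name and row layout are illustrative:

```python
import csv

def write_clear_cache_file(path, rows):
    """Write a clear-cache input file: UTF-8 without a BOM, Unix line
    endings, semicolon-delimited, with the documented column headers."""
    # newline="" lets the csv module control line endings; lineterminator
    # forces Unix EOL, and the plain "utf-8" codec emits no BOM.
    with open(path, "w", encoding="utf-8", newline="") as f:
        writer = csv.writer(f, delimiter=";", lineterminator="\n")
        writer.writerow(["SourceType", "SourceValue"])
        writer.writerows(rows)
```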
{
"title": "Batch merge & unmerge",
"pageID": "164470091",
"pageLink": "/pages/viewpage.action?pageId=164470091",
"content": "DescriptionThis is the batch operation to merge/unmerge entities in Reltio. The process was designed to execute the force merge operation between Reltio objects. In Reltio, there are merge rules that automatically merge objects, but the user may also explicitly define the merge between objects. This process is the adapter to the _merge or _unmerge operation that allows the user to specify a CSV file with multiple entries so there is no need to call the API multiple times.  Flow: The client delivers files including the list of merge/unmerge operations to be executed by HUB. Files must be placed in an S3 resource controlled by MDM HUB, either by a client or by MDM HUB support via the HUB UI. The batch process is triggered by Airflow directly or by the HUB UI.The process parses the input files and calls the Reltio API to merge or unmerge entities.The result of the process is a report file generated and published to S3.File load through UI details:MAX SizeMax file size is 128MB or 10k recordsHow to prepare the file to avoid unexpected errors:File format descriptionFile needs to be encoded with UTF-8 without BOM. Merge operation Input fileFile format: CSV Encoding: UTF-8EOL: UnixHow to set this up using Notepad++:Set encoding:Set EOL to Unix:Check (bottom right corner):File name format: merge_YYYYMMDD.csvDrop location: DEV: s3://pfe-baiaes-eu-w1-nprod-project/mdm/DEV/merge_unmerge_entities/input/STAGE: s3://pfe-baiaes-eu-w1-nprod-project/mdm/STAGE/merge_unmerge_entities/input/PROD: Column headers:The column names are kept for backward compatibility. The winner of the merge is always the entity that was created earlier. 
There is currently no possibility to select an explicit winner via the merge_unmerge batch.WinnerSourceName - source name of the source entity: the survivor of the merge operation or the entity that will be splitWinnerId - id of the source entity: the survivor of the merge operation or the entity that will be splitLoserSourceName - source name of the target entity: the loser of the merge operation LoserId - id of the target entity: the loser of the merge operation In the output file there are two additional fields:responseStatus - the response statusresponseErrorMessage - the error messageMerge input file example\nWinnerSourceName;WinnerId;LoserSourceName;LoserId\nRELTIO;15hgDlsd;RELTIO;1JRPpffH\nRELTI;15hgDlsd;RELTIO;1JRPpffH\nOutput fileFile format: CSV Encoding: UTF-8File name format: status_merge_YYYYMMDD_<seqNr>.csv  <seqNr> - the number of the file process in the current day. Starting with 1 to n. Drop location: DEV: s3://pfe-baiaes-eu-w1-nprod-project/mdm/DEV/merge_unmerge_entities/output/YYYYMMDD_hhmmss/STAGE: s3://pfe-baiaes-eu-w1-nprod-project/mdm/DEV/merge_unmerge_entities/output/YYYYMMDD_hhmmss/PROD: Column headers:sourceId.type - source name of the source entity: the survivor of the merge operation or the entity that will be splitsourceId.value - id of the source entity: the survivor of the merge operation or the entity that will be splitstatus - the response statuserrorCode - the error codeerrorMessage - the error messageMerge output file example\nsourceId.type,sourceId.value,status,errorCode,errorMessage\nmerge_RELTIO_RELTIO,0009e93_00Ff82E,updated,,\nmerge_GRV_GRV,6422af22f7c95392db313216_23f45427-8cdc-43e6-9aea-0896d4cae5f8,updated,,\nmerge_RELTI_RELTIO,15hgDlsd_1JRPpffH,notFound,EntityNotFoundByCrosswalk,Entity not found by crosswalk in getEntityByCrosswalk [Type:RELTI Value:15hgDlsd]\nUnmerge operation Input fileFile format: CSV Encoding: UTF-8File name format: unmerge_YYYYMMDD_<seqNr>.csv  <seqNr> - the number of the file process in the 
current day. Starting with 1 to n. Drop location: DEV: s3://pfe-baiaes-eu-w1-nprod-project/mdm/DEV/merge_unmerge_entities/input/STAGE: s3://pfe-baiaes-eu-w1-nprod-project/mdm/STAGE/merge_unmerge_entities/input/Column headers:SourceURI - uri of the source entityTargetURI - uri of the extracted entityUnmerge input file example\nSourceURI;TargetURI\n15hgG6nP;15hgG6nQ1\n15hgG6qc;15hgG6rq\nOutput fileFile format: CSV Encoding: UTF-8File name format: status_umerge_YYYYMMDD_<seqNr>.csv  <seqNr> - the number of the file process in the current day. Starting with 1 to n. Column headers:SourceURI - uri of the source entityTargetURI - uri of the extracted entityresponseStatus - the response statusresponseErrorMessage - the error messageUnmerge output file example\nsourceId.type,sourceId.value,status,errorCode,errorMessage\nunmerge_RELTIO_RELTIO,01lAEll_01jIfxx,updated,,\nunmerge_RELTIO_RELTIO,0144V4D_01EFVyb,updated,,\nInternalsAirflow process name: merge_unmerge_entities"
},
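Since the winner is always the earlier-created entity and the column names are kept only for backward compatibility, a file generator can pre-order each pair for readability. A sketch; the 'created' field is assumed client-side metadata for sorting, not something the batch itself reads:

```python
def merge_row(a, b):
    """Order a pair of entities into a merge-file row
    (WinnerSourceName;WinnerId;LoserSourceName;LoserId).

    Reltio keeps the earlier-created entity regardless of column order,
    so the older entity is placed in the Winner columns for readability.
    Each entity is a dict with 'source', 'id' and a sortable 'created'.
    """
    winner, loser = sorted((a, b), key=lambda e: e["created"])
    return [winner["source"], winner["id"], loser["source"], loser["id"]]
```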
{
"title": "Batch reload MapChannel data",
"pageID": "407896553",
"pageLink": "/display/GMDM/Batch+reload+MapChannel+data",
"content": "DescriptionThis process is used to reload source data from GCP/GRV systems. The user has two ways to indicate the data he wants to reload:CSV file - contains lines with entity URIs or crosswalk valuesMongo query - only entities meeting the criteria will be reloadedIn this process, an Airflow DAG is used to control the flow.  Flow: The client delivers files including the list of entity URIs/crosswalk values. Files must be placed in an S3 resource controlled by MDM HUB, either by a client via the HUB UI or by MDM HUB support.The Airflow DAG is triggered.The process parses the input and queries Mongo for the selected entitiesFor each entity - events are sent to the raw GCP/GRV input topicsThe result of the process is a report file generated and published to S3File load through UI details:MAX SizeMax file size is 128MBInput file examplereload_map_channel_data.csv Output fileFile format: CSV Encoding: UTF-8File name format: report__reload_map_channel_data_YYYYMMDD_<seqNr>.csv  <seqNr> - the number of the file process in the current day. Starting with 1 to n. Column headers: TODOOutput file example TODOSourceCrosswalkType,SourceCrosswalkValue,IdentifierType,IdentifierValue,status,errorCode,errorMessageReltio,upIP01W,HCOIT.PFORCERX,TEST9_OEG_1000005218888,failed,404,Can't find entity for target: EntityURITargetObjectId(entityURI=entities/upIP01W)SAP,3000201428,HCOIT.SAP,3000201428,failed,CrosswalkNotFoundException,Entity not found by crosswalk in getEntityByCrosswalk [Type:SAP Value:3000201428]InternalsAirflow process name: reload_map_channel_data_{{ env }}"
},
{
"title": "Batch Reltio Reindex",
"pageID": "337846347",
"pageLink": "/display/GMDM/Batch+Reltio+Reindex",
"content": "DescriptionThis is the operation to execute the Reltio Reindex API. The process was designed to take an input CSV file with entity URIs and schedule the Reltio Reindex API. More details about the Reltio API are available here: 5. Reltio Reindex. HUB wraps the Entity URIs and schedules a Reltio Task.  Flow: The client delivers files including the list of entity URIs. The file is uploaded to the S3 resource by the MDM HUB UI.The Reltio Reindex process is triggered by the MDM HUB Admin service.The process parses the input files and calls the Reltio API.File load through UI details:MAX SizeMax file size is 128MB. The user should be able to load around 7.4M entity URI lines in one file to fit into a 128MB file size. Please check the file size before uploading. Larger files will be rejected.Please be aware that a 128MB file upload may take a few minutes depending on the user's network performance. Please wait until processing is finished and the response appears.How to prepare the file to avoid unexpected errors:File format descriptionFile needs to be encoded with UTF-8 without BOM.Input fileFile format: CSV Encoding: UTF-8EOL: UnixHow to set this up using Notepad++:Set encoding:Set EOL to Unix:Check (bottom right corner):Column headers:N/A - do not add headersInput file example (reltio_reindex.csv):\nentities/E0pV5Xm\nentities/1CsgdXN4\nentities/2O5RmRi\nInternalsAirflow process name: reindex_entities_mdm_{{ env }}"
},
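Because uploads above 128MB are rejected (roughly 7.4M URI lines), a large URI list can be split into several files up front. A minimal sketch of the byte-budget arithmetic; the limit value comes from the page above, the helper name is illustrative:

```python
def chunk_uris(uris, max_bytes=128 * 1024 * 1024):
    """Split entity URIs (e.g. 'entities/E0pV5Xm') into chunks whose
    newline-joined size stays within max_bytes, so each output file
    fits the 128MB upload limit."""
    chunks, current, size = [], [], 0
    for uri in uris:
        line = len(uri.encode("utf-8")) + 1  # +1 for the Unix newline
        if current and size + line > max_bytes:
            chunks.append(current)
            current, size = [], 0
        current.append(uri)
        size += line
    if current:
        chunks.append(current)
    return chunks
```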
{
"title": "Batch update identifiers",
"pageID": "234704200",
"pageLink": "/display/GMDM/Batch+update+identifiers",
"content": "DescriptionThis is the batch operation to update identifiers in Reltio. The process was designed to update identifiers selected by identifier lookup code. This process is an adapter to the /entities/_updateAttributes operation exposed by the mdmhub manager service that allows the user to modify nested attributes using specific filters.The source for the batch process is a CSV in which one row corresponds to a single identifier that should be changed.In this process, the batch service is used to control the flow  Flow: The client delivers files including the list of identifiers that should be updated. Files must be placed in an S3 resource controlled by MDM HUB, either by a client via the HUB UI or by MDM HUB support.The batch process is triggered by Airflow, either manually or on a schedule.The process parses the input files and calls the Reltio API to update identifiersThe result of the process is a report file generated and published to S3File load through UI details:MAX SizeMax file size is 128MB or 10k recordsHow to prepare the file to avoid unexpected errors:File format descriptionFile needs to be encoded with UTF-8 without BOM. Input fileFile format: CSV Encoding: UTF-8EOL: UnixHow to set this up using Notepad++:Set encoding:Set EOL to Unix:Check (bottom right corner):File name format: update_identifiers_YYYYMMDD_<seqNr>.csv  <seqNr> - the number of the file process in the current day. Starting with 1 to n. 
Drop location: GBL:DEV: s3://pfe-atp-eu-w1-nprod-mdmhub/gbl/dev/inbound/update_identifiersSTAGE: s3://pfe-atp-eu-w1-nprod-mdmhub/gbl/stage/inbound/update_identifiersPROD: s3://pfe-baiaes-eu-w1-project/mdm/inbound/update_identifiersEMEA:DEV: s3://pfe-atp-eu-w1-nprod-mdmhub/emea/dev/inbound/update_identifiersQA: s3://pfe-atp-eu-w1-nprod-mdmhub/emea/qa/inbound/update_identifiersSTAGE: s3://pfe-atp-eu-w1-nprod-mdmhub/emea/stage/inbound/update_identifiersPROD: s3://pfe-atp-eu-w1-prod-mdmhub/emea/prod/inbound/update_identifiersColumn headers:SourceCrosswalkType - source crosswalk type that describes entity. If you use "Reltio" then you should use entity uri in SourceCrosswalkValue column. For every other crosswalk type use SourceCrosswalkValue - source crosswalk value that describes entityIdentifierType - identifier type that you want to modifyIdentifierValue - identifier values that you want to set(update/insert/merge). More information in /entities/_updateAttributes documentationIdentifierTrust - trust flag for given identifier, accepted values: Yes, No and <empty string>. In case of <empty string>, default value No for AMER, APAC, EMEA and null for GBL will be set.IdentifierSourceName - source name of updated identifier. In case of <empty string>, default value HUB_ID for AMER, APAC, EMEA and null for GBL will be set.Action - action you want to perform on attribute. 
More information in the /entities/_updateAttributes documentation.delete - IGNORE_ATTRIBUTE - the IdentifierType has to exist - if it does not exist, do not delete and share the information in the "details" attribute that the target key does not exist. This operation works like DELETE FROM Identifiers WHERE key=(key)update - UPDATE_ATTRIBUTE - the IdentifierType has to exist - if it does not exist, share the information in the "details" attribute that the target key does not exist. This operation works like UPDATE Identifiers SET (set) WHERE key=(key)Only allows updating existing attributes (for example, if the ID does not exist in the target - do not update this Identifier and share the information in the details that "ID" does not exist in the target)insert - INSERT_ATTRIBUTE - only allows inserting new attributes; if the "set" exists in the target, return the information in the "details" element that such an object already exists. This operation works like INSERT INTO Identifiers values (set) Adds only a new element to the target array.merge - (insert or update) (similar to "update" but it makes an insert if "set" elements do not exist in the target) - updates attributes matched by the key or inserts a new one. If there are multiple keys related to one filter, it updates all matches or inserts a new one. In this case, we are checking the target array. For example, we matched multiple target Identifiers by the "key" and we want to "set" the "ID". If the target identifier does not have the "ID" we are making an INSERT_ATTRIBUTE; if the target attribute contains the "ID" we are making the UPDATE_ATTRIBUTE.replace - (delete or insert) - deletes (IGNORE_ATTRIBUTE) attributes matched by the "key" and inserts the new one.This operation works in a way that it will delete all target attributes matched by the "key" and put only one new Identifier in that place. For example, we had 3 Identifiers in the target matching by the "key". 
Replace will cause that the target now has 1 new Identifier: 3 old ones are removed (IGNORE_ATTRIBUTE) and a new one is inserted (INSERT_ATTRIBUTE).TargetCrosswalkType - HUB_ID is the default source that updates the data in Reltio - N/A - keep empty and add just this header.Input file example (update_identifier_20220323.csv):\nSourceCrosswalkType;SourceCrosswalkValue;IdentifierType;IdentifierValue;IdentifierTrust;IdentifierSourceName;Action;TargetCrosswalkType\nReltio;upIP01W;HCOIT.PFORCERX;TEST9_OEG_1000005218888;;;update;\nSAP;3000201428;HCOIT.SAP;3000201428;Yes;SAP;update;\nOutput fileFile format: CSV Encoding: UTF-8File name format: report__update_identifiers_YYYYMMDD_<seqNr>.csv  <seqNr> - the number of the file process in the current day. Starting with 1 to n. Column headers:SourceCrosswalkType - source crosswalk type that describes the entity. If you use "Reltio" then you should use the entity uri in the SourceCrosswalkValue column. For every other crosswalk type use SourceCrosswalkValue - source crosswalk value that describes the entityIdentifierType - identifier type that you want to modifyIdentifierValue - identifier values that you want to set (update/insert/merge). More information in the /entities/_updateAttributes documentationstatus - the response statuserrorCode - the error codeerrorMessage - the error messageOutput file example\nSourceCrosswalkType,SourceCrosswalkValue,IdentifierType,IdentifierValue,status,errorCode,errorMessage\nReltio,upIP01W,HCOIT.PFORCERX,TEST9_OEG_1000005218888,failed,404,Can't find entity for target: EntityURITargetObjectId(entityURI=entities/upIP01W)\nSAP,3000201428,HCOIT.SAP,3000201428,failed,CrosswalkNotFoundException,Entity not found by crosswalk in getEntityByCrosswalk [Type:SAP Value:3000201428]\nInternalsAirflow process name: update_identifiers_{{ env }}"
},
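The delete/update/insert/merge/replace actions above map onto list operations over the target Identifiers array. A simplified model of that mapping, matching identifiers by their type only (the real API matches by configurable keys and reports misses in the "details" attribute rather than silently ignoring them):

```python
def apply_action(identifiers, action, key, new):
    """Toy model of the update-identifier actions over a list of
    identifier dicts, matched by 'type' only (a simplification)."""
    matched = any(i["type"] == key for i in identifiers)
    if action == "delete":   # DELETE FROM Identifiers WHERE key=(key)
        return [i for i in identifiers if i["type"] != key]
    if action == "update":   # UPDATE ... WHERE key=(key); no-op if absent
        return [dict(i, **new) if i["type"] == key else i for i in identifiers]
    if action == "insert":   # INSERT only if not already present
        return identifiers if matched else identifiers + [new]
    if action == "merge":    # update matches, else insert
        if matched:
            return [dict(i, **new) if i["type"] == key else i for i in identifiers]
        return identifiers + [new]
    if action == "replace":  # delete all matches, insert the new one
        return [i for i in identifiers if i["type"] != key] + [new]
    raise ValueError(f"unknown action: {action}")
```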
{
"title": "Callbacks",
"pageID": "164469861",
"pageLink": "/display/GMDM/Callbacks",
"content": "DescriptionThe HUB Callbacks are divided into the following two sections:The PreCallback process is responsible for the Ranking of the selected attributes (RankSorters). This callback is based on the full enriched events from "${env}-internal-reltio-full-events". Only events that do not require additional ranking updates in Reltio are published to the next processing stage. Some ranking calculations - like OtherHCOtoHCO - are delayed and processed in PreDylayCallbackService; this functionality was required to gather all changes for relations in time windows and send events to Reltio only after the aggregation window is closed. This limits the number of events and updates to Reltio. OtherHCOtoHCOAffiliations Rankings - more details related to the OtherHCOtoHCO relation ranking, with all PreDylayCallbackService and DelayRankActivationProcessor rank details: OtherHCOtoHCOAffiliations RankSorterThe "Post" Callback process is responsible for the specific logic and is based on the events published by the Event Publisher component. Here are the processes executed in the post callback process:AttributeSetter Callback - based on the "{env}-internal--callback-attributes-setter-in" events. Sets additional attributes for the EMEA COMPANY France market, e.g. ComplianceMAPPHCPStatusCrosswalkActivator Callback - based on the "${env}-internal-callback-activator-in" events. Activates selected crosswalks or soft-deletes specific crosswalks based on the configuration. CrosswalkCleaner Callback - based on the "${env}-internal-callback-cleaner-in" events. Cleans the orphan HUB_Callback crosswalk or soft-deletes specific crosswalks based on the configuration. CrosswalkCleanerWithDelay Callback - based on the "${env}-internal-callback-cleaner-with-delay-in" events. 
Cleans the orphan HUB_Callback crosswalk or soft-deletes specific crosswalks based on the configuration, with delay (aggregates events in a time window)DanglingAffiliations Callback - based on the "${env}-internal-callback-orphan-clean-in" events. Removes orphan affiliations once one of the start or end objects has been removed. Derived Addresses Callback - based on the "${env}-internal-callback-derived-addresses-in" events. Rewrites an Address from an HCO to an HCP connected to it by some type of Relationship. Used on the IQVIA tenant.HCONames Callback for IQVIA model - based on the "${env}-internal-callback-hconame-in" events. Calculates HCO Names. HCONames Callback for COMPANY model - based on the "${env}-internal-callback-hconame-in" events. Calculates HCO Names in the COMPANY Model.NotMatch Callback - based on the "${env}-internal-callback-potential-match-cleaner-in" events. Based on the created relationships between two matched objects, removes the match using the _notMatch operation. More details about the HUB callbacks are described in the sub-pages. Flow diagram"
},
{
"title": "AttributeSetter Callback",
"pageID": "250150261",
"pageLink": "/display/GMDM/AttributeSetter+Callback",
"content": "DescriptionCallback auto-fills configured static Attributes, as long as the profile's attribute values meet the requirements. If no requirement (rule) is met, an optional cleaner deletes the existing, Hub-provided value for this attribute. AttributeSetter uses Manager's Update Attributes async interface.Flow DiagramStepsAfter the event has been routed from EventPublisher, check the following:Entity must be active and have at least one active crosswalk Event Type must match the configured allowedEventTypesCountry must match the configured allowedCountriesFor each configured setAttribute do the following:Check if the entityType matches For each rule do the following:Check if the criteria are metIf the criteria are met:Check if a Hub crosswalk already provides the AutoFill value (either the Attribute's value or lookupCode must match)If the attribute value is already present, do nothingIf the attribute is not present:Add inserting the AutoFill attribute to the list of changesCheck if the Hub crosswalk provides another value for this attributeIf the Hub crosswalk provides another value, add deleting that attribute value to the list of changesIf no rules were matched for this setAttribute and the cleaner is enabled:Find the Hub-provided value of this attribute and add deleting this value to the list of changes (if it exists)Map the list of changes into a single AttributeUpdateRequest object and send it to the Manager inbound topic.ConfigurationExample AttributeSetter rule (multiple allowed):\n - setAttribute: "ComplianceMAPPHCPStatus"\n entityType: "HCP"\n cleanerEnabled: true\n rules:\n - name: "AutoFill HCPMHS.Non-HCP IF SubTypeCode = Administrator (HCPST.A) / Researcher/Scientist (HCPST.C) / Counselor/Social Worker (HCPST.CO) / Technician/Technologist (HCPST.TC)"\n setValue: "HCPMHS.Non-HCP"\n where:\n - attribute: "SubTypeCode"\n values: [ "HCPST.A", "HCPST.C", "HCPST.CO", "HCPST.TC" ]\n\n - name: "AutoFill HCPMHS.Non-HCP IF SubTypeCode = Allied Health Professionals (HCPST.R) AND PrimarySpecialty = Psychology 
(SP.PSY)"\n setValue: "HCPMHS.Non-HCP"\n where:\n - attribute: "SubTypeCode"\n values: [ "HCPST.R" ]\n - attribute: "Specialities"\n nested:\n - attribute: "Primary"\n values: [ "true" ]\n - attribute: "Specialty"\n values: [ "SP.PSY" ]\n\n - name: "AutoFill HCPMHS.HCP for all others"\n setValue: "HCPMHS.HCP"\nRule inserts ComplianceMAPPHCPStatus attribute for every HCP:"HCPMHS.Non-HCP" for every profile having SubTypeCode in [ "HCPST.A", "HCPST.C", "HCPST.CO", "HCPST.TC" ]"HCPMHS.Non-HCP" for every profile having SubTypeCode == "HCPST.R" where one of Specialities == "SP.PSY" and has Primary flag"HCPMHS.HCP" in all other scenariosDependent ComponentsComponentUsageCallback ServiceMain component with flow implementationPublisherGeneration of incoming eventsManagerAsynchronous processing of generated AttributeUpdateRequest events"
},
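The rule evaluation described in the Steps section is essentially first-match-wins over the configured rules, with a rule that has no "where" acting as the catch-all. A flat-attribute sketch (nested criteria such as Specialities are omitted for brevity):

```python
def evaluate_rules(profile, rules):
    """Return the setValue of the first rule whose 'where' criteria all
    match the profile's flat attributes; a rule without 'where' always
    matches and so acts as the default."""
    for rule in rules:
        criteria = rule.get("where", [])
        if all(profile.get(c["attribute"]) in c["values"] for c in criteria):
            return rule["setValue"]
    return None  # no rule matched; the optional cleaner would run here
```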
{
"title": "CrosswalkActivator Callback",
"pageID": "302701827",
"pageLink": "/display/GMDM/CrosswalkActivator+Callback",
"content": "DescriptionCrosswalkActivator is the opposite of CrosswalkCleaner. There are 4 main processing branches (described in more detail in the "Algorithm" section):WhenOneKeyExistsAndActive - activate all crosswalks having:crosswalk type as in the configuration,crosswalk value same as an existing, active Onekey crosswalk in this profile.WhenAnyOneKeyExistsAndActive - activate all crosswalks of types same as in configuration, as long as there is at least one active Onekey crosswalk present in this profile.WhenAnyCrosswalksExistsAndActive - activate all crosswalks of types same as in configuration, as long as there is at least one active crosswalk present in this profile (crosswalk types in the except section of configuration are not considered as active crosswalks).ActivateOneKeyReferbackCrosswalkWhenRelatedOneKeyCrosswalkExistsAndActive - activate OneKey referback crosswalk (with lookupCode in configuration), as long as there is at least one active Onekey crosswalk present in this profileAlgorithmFor each event from ${env}-internal-callback-activator-in topic, do:filter by event country (configured),filter by event type (configured, usually only CHANGED events),Processing: WhenOneKeyExistsAndActivefind all active Onekey crosswalks (exact Onekey source name is fetched from configuration)for each crosswalk in the input event entity do:if crosswalk type is in the configured list (getWhenOneKeyExistsAndActive) and crosswalk value is the same as one of active Onekey crosswalks, send activator request to Manager,activator request contains entityType,activated crosswalk with empty string ("") in deleteDate,Country attribute rewritten from the input event,Manager processes the request as partialOverride.Processing: WhenAnyOneKeyExistsAndActivefind all active Onekey crosswalks (exact Onekey source name is fetched from configuration)for each crosswalk in the input event entity do:if crosswalk type is in the configured list (getWhenAnyOneKeyExistsAndActive) and active 
Onekey crosswalks list is not empty, send an activator request to Manager,activator request contains entityType,activated crosswalk with empty string ("") in deleteDate,Country attribute rewritten from the input event,Manager processes the request as partialOverride.Processing: WhenAnyCrosswalksExistsAndActivefind all active crosswalks (sources in the configuration except list are filtered out)for each crosswalk in the input event entity do:if the crosswalk type is in the configured list (getWhenAnyCrosswalksExistsAndActive) and the active crosswalks list is not empty, send an activator request to Manager,activator request contains entityType,activated crosswalk with empty string ("") in deleteDate,Country attribute rewritten from the input event,Manager processes the request as partialOverride.Processing: ActivateOneKeyReferbackCrosswalkWhenRelatedOneKeyCrosswalkExistsAndActivefind all OneKey crosswalks,check for an active OneKey crosswalk with lookupCode included in the configured list oneKeyLookupCodes,check for a related inactive OneKey referback crosswalk with lookupCode included in the configured list referbackLookupCodes,if the above conditions are met, send an activator request to Manager,the activator request contains:entityType,activated OneKey referback crosswalk with empty string ("") in deleteDate,Country attribute rewritten from the input event,Manager processes the request as partialOverride.Dependent componentsComponentUsageCallback ServiceMain component with flow implementationPublisherRoutes incoming eventsManagerAsync processing of generated activator requests"
},
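The WhenOneKeyExistsAndActive branch described above can be sketched as follows. This is an illustrative sketch, not the actual Callback Service code: the `Crosswalk` class, the `ONEKEY` source name, and the contents of `WHEN_ONEKEY_TYPES` (standing in for the `getWhenOneKeyExistsAndActive` configuration) are assumptions.

```python
from dataclasses import dataclass

ONEKEY_SOURCE = "ONEKEY"               # assumption: exact source name comes from configuration
WHEN_ONEKEY_TYPES = {"SAP", "VEEVA"}   # assumption: stands in for getWhenOneKeyExistsAndActive

@dataclass
class Crosswalk:
    type: str
    value: str
    delete_date: str = ""              # empty string means the crosswalk is active

def activation_requests(crosswalks):
    """WhenOneKeyExistsAndActive: select crosswalks whose type is configured
    and whose value equals the value of an active ONEKEY crosswalk."""
    active_onekey_values = {
        c.value for c in crosswalks
        if c.type == ONEKEY_SOURCE and not c.delete_date
    }
    return [
        # the activator request carries the crosswalk with deleteDate = ""
        Crosswalk(c.type, c.value, delete_date="")
        for c in crosswalks
        if c.type in WHEN_ONEKEY_TYPES and c.value in active_onekey_values
    ]
```

Each returned crosswalk would then be wrapped with the entityType and rewritten Country attribute and processed by Manager as a partialOverride.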
{
"title": "CrosswalkCleaner Callback",
"pageID": "164469744",
"pageLink": "/display/GMDM/CrosswalkCleaner+Callback",
"content": "DescriptionThis process removes using the hard delete or soft-delete operation crosswalks on Entity or Relation objects. There are the following sections in this process.Hard Delete Crosswalks - EntitiesBased on the input configuration removes the crosswalk from Reltio once all other crosswalks were removed or inactivated.  Once the source decides to inactivated the crosswalk, associated attributes are removed from the Golden Profile (OV), and in that case Rank attributes delivered by the HUB have to be removed. The process is used to remove orphan HUB_CALLBACK crosswalks that are used in the PreCallbacks (Rankings/COMPANYGlobalCustomerId/Canada Micro-Bricks/HCPType) processHard Delete Crosswalks - RelationshipsThis is similar to the above. The only difference here is that the PreCallbacks (Rankings/COMPANYGlobalCustomerId/Canada Micro-Bricks/HCPType) process is adding new Rank attributes to the relationship between two objects. Once the relationship is deactivated by the Source, the orphan HUB_CALLBACK crosswalk is removed. Soft Delete Crosswalks This process does not remove the crosswalk from Reltio. It updates the existing providing additional deleteDate attribute on the soft-deleting crosswalk. In that case in Reltio the corresponding crosswalk becomes inactive. There are three types of soft-deletes:always - soft-delete crosswalks based on the configuration once all other crosswalks are removed or inactivated,whenOneKeyNotExists - soft-delete crosswalks based on the configuration once ONEKEY crosswalk is removed or inactivated. 
This process is similar to the "always" process by the activation is only based on the ONEKEY crosswalk inactivation,softDeleteOneKeyReferbackCrosswalkWhenOneKeyCrosswalkIsInactive - soft-delete ONEKEY referback crosswalk (lookupCode in configuration) once ONEKEY crosswalk is inactivated.Flow diagramStepsEvent publisher publishes full events to ${env}-internal-callback-cleaner-in including 'HCO_CHANGED', 'HCP_CHANGED', 'MCO_CHANGED', 'RELATIONSHIP_CHANGED' eventsOnly events with the correct event type are processed.Then the checks are activated checking if it is possible to: hard delete entity crosswalkshard delete relationship crosswalkssoft delete crosswalksIt is possible that for one event multiple checks are going to be activated, in that case, multiple output events will be generated. Once the criteria are successfully fulfilled, the events are generated to the "${env}-internal-async-all-cleaner-callbacks" topic to the next processing step in the Manager component. TriggersTrigger actionComponentActionDefault timeIN Events incoming mdm-callback-service:CrosswalkCleanerStream (callback package)Process events and calculate hard or soft-delete requests and publish to the next processing stage. realtime - events streamDependent componentsComponentUsageCallback ServiceMain component with flow implementationPublisherEvents publisher generates incoming eventsManagerAsynchronous process of generated events"
},
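The hard-delete decision for entity crosswalks described above can be sketched as below. This is a minimal sketch, assuming crosswalks are represented as dicts with a `type` and a `deleteDate` field (empty when active); the actual Callback Service representation may differ.

```python
HUB_TYPE = "HUB_CALLBACK"   # the HUB-owned crosswalk type described above

def crosswalks_to_hard_delete(crosswalks):
    """Return the HUB_CALLBACK crosswalks to hard-delete: they are orphaned
    once every other crosswalk on the object has been removed or inactivated
    (i.e. carries a non-empty deleteDate)."""
    others = [c for c in crosswalks if c["type"] != HUB_TYPE]
    if any(not c.get("deleteDate") for c in others):
        return []                       # some source crosswalk is still active
    return [c for c in crosswalks if c["type"] == HUB_TYPE]
```

The relationship variant works the same way, applied to crosswalks on the relation object instead of the entity.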
{
"title": "CrosswalkCleanerWithDelay Callback",
"pageID": "302701874",
"pageLink": "/display/GMDM/CrosswalkCleanerWithDelay+Callback",
"content": "DescriptionCrosswalkCleanerWithDelay works similarly to CrosswalkCleaner. It is using the same Kafka Streams topology, but events are trimmed (eliminateNeedlessData parameter - all the fields other than crosswalks are removed), and, which is most important, deduplication window is added.Deduplication window's parameters are configured, there are no default parameters. EMEA PROD example:8 hour window (Callback Service's config: callback.crosswalkCleanerWithDelay.deduplication.duration)1 hour ping interval (Callback Service's config: callback.crosswalkCleanerWithDelay.deduplication.pingInterval)This means, that the delay is equal to 8-9 hours.AlgorithmFor more details on algorithm steps, see CrosswalkCleaner Callback.DependenciesComponentUsageCallback ServiceMain component with flow implementationPublisherRoutes incoming eventsManagerAsync processing of generated requests"
},
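The deduplication window behaviour can be sketched as follows: events for the same key are held, later duplicates replace earlier ones, and a key is released only once the window has elapsed, checked at each ping. With an 8 h window and a 1 h ping, release happens 8-9 hours after the first event. This is an illustrative model, not the Kafka Streams implementation itself.

```python
class DedupWindow:
    """Minimal model of the CrosswalkCleanerWithDelay deduplication window."""

    def __init__(self, duration_s):
        self.duration_s = duration_s
        self.pending = {}            # key -> (first_seen, latest_event)

    def offer(self, key, event, now_s):
        # Later events for the same key replace the held one,
        # but the window is anchored at the first occurrence.
        first_seen, _ = self.pending.get(key, (now_s, None))
        self.pending[key] = (first_seen, event)

    def ping(self, now_s):
        """Called on every ping interval; flush keys whose window elapsed."""
        released = []
        for key, (first_seen, event) in list(self.pending.items()):
            if now_s - first_seen >= self.duration_s:
                released.append((key, event))
                del self.pending[key]
        return released
```

Because the release check only runs on the ping, an event's actual delay is between the window duration and the duration plus one ping interval.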
{
"title": "DanglingAffiliations Callback",
"pageID": "164469754",
"pageLink": "/display/GMDM/DanglingAffiliations+Callback",
"content": "DescriptionDanglingAffiliation Callback consists of two sub-processes:DanglingAffiliations Based On Inactive Objects (legacy)DanglingAffiliations Based On Same Start And End Objects (added in August 2023)"
},
{
"title": "DanglingAffiliations Based On Inactive Objects",
"pageID": "347635836",
"pageLink": "/display/GMDM/DanglingAffiliations+Based+On+Inactive+Objects",
"content": "DescriptionThe process soft-deletes active relationships between inactivated start or end objects. Based on the configuration only REMOVED or INACTIVATE events are processed. It means that once the Start or End objects becomes inactive process checks the orphan relationship and sends the soft-delete request to the next processing stage. Flow diagramStepsEvent publisher publishes full events to ${env}-internal-callback-orphanClean-in including 'HCP_REMOVED', 'HCO_REMOVED', 'MCO_REMOVED', 'HCP_INACTIVATED', 'HCO_INACTIVATED', 'MCO_INACTIVATED' eventsOnly events with the correct event type are processed.In the next step, the Relationship is retrieved from the HUB by StartObjectURI or EndObjectURI.Once the relationship exists and is ACTIVE the Soft-Delete Request is generated to the "${env}-internal-async-all-cleaner-callbacks" topic to the next processing step in the Manager component. TriggersTrigger actionComponentActionDefault timeIN Events incoming mdm-callback-service:DanglingAffiliationsStream (callback package)Process events for inactive entities and calculate soft-delete requests and publish to the next processing stage. realtime - events streamDependent componentsComponentUsageCallback ServiceMain component with flow implementationPublisherEvents publisher generates incoming eventsManagerAsynchronous process of generated eventsHub StoreRelationship Cache"
},
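The steps above can be sketched as a lookup against the relationship cache. The event and cache field names (`eventType`, `entityURI`, `startObjectURI`, etc.) are assumed for illustration; the real Hub Store documents may be shaped differently.

```python
# Event types accepted by the flow, as listed above.
ACCEPTED_TYPES = {
    "HCP_REMOVED", "HCO_REMOVED", "MCO_REMOVED",
    "HCP_INACTIVATED", "HCO_INACTIVATED", "MCO_INACTIVATED",
}

def soft_delete_requests(event, relation_cache):
    """For an inactivation/removal event, find ACTIVE cached relationships
    touching the entity and emit soft-delete requests for them."""
    if event["eventType"] not in ACCEPTED_TYPES:
        return []
    uri = event["entityURI"]
    return [
        {"relationURI": r["uri"], "deleteDate": event["timestamp"]}
        for r in relation_cache
        if r["status"] == "ACTIVE"
        and uri in (r["startObjectURI"], r["endObjectURI"])
    ]
```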
{
"title": "DanglingAffiliations Based On Same Start And End Objects",
"pageID": "347635839",
"pageLink": "/display/GMDM/DanglingAffiliations+Based+On+Same+Start+And+End+Objects",
"content": "DescriptionThis process soft-deletes looping relations - active relations having the same startObject and endObject.Such loops can be created in one of two ways:merge-on-the-fly of two entities,manual merge of two entitiesboth of these create a RELATIONSHIP_CHANGED event, so the process is based off of RELATIONSHIP_CREATED and RELATIONSHIP_CHANGED events.Unlike the other DanglingAffiliations sub-process, this one does not query the cache for relations, because all the required information is in the processed event.Flow diagramStepsEvent publisher publishes full events to ${env}-internal-callback-orphanClean-in including RELATIONSHIP_CREATED and RELATIONSHIP_CHANGED eventsOnly events with the correct event type are processed.If there is a country list configured, the event country is also checked before processing.Current state of relation in the event is checked for the following:is startObject.objectURI the same as endObject.objectURI?is relation active (no endDate is set)?does the relation type match the configured list of relationTypes (only if configured list is not empty)?If all of the above are true, a soft-delete request is generated to the ${env}-internal-async-all-cleaner-callbacks topic to the next processing step in the Manager component. TriggersTrigger actionComponentActionDefault timeIN Events incoming mdm-callback-service:DanglingAffiliationsStream (callback package)Process events for relations and calculate soft-delete requests and publish to the next processing stage. realtime - events streamDependent componentsComponentUsageCallback ServiceMain component with flow implementationPublisherEvents publisher generates incoming eventsManagerAsynchronous process of generated events"
},
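The three-part check on the relation's current state can be sketched as a single predicate. The nested event structure (`startObject.objectURI`, `endDate`, `type`) is assumed from the step descriptions above.

```python
def is_looping_relation(relation, relation_types=()):
    """True when the relation should be soft-deleted: same start and end
    object, still active (no endDate), and - only when the configured
    relationTypes list is non-empty - a type on that list."""
    same_ends = (relation["startObject"]["objectURI"]
                 == relation["endObject"]["objectURI"])
    active = not relation.get("endDate")
    type_ok = not relation_types or relation["type"] in relation_types
    return same_ends and active and type_ok
```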
{
"title": "Derived Addresses Callback",
"pageID": "294677441",
"pageLink": "/display/GMDM/Derived+Addresses+Callback",
"content": "DescriptionThe Callback is a tool for rewriting an Address from HCO to HCP, connected to each other with some type of Relationship.Sequence DiagramFlowProcess is a callback. It operates on four Kafka topics:${env}-internal-callback-derived-addresses-in input topic, containing simple events:HCP_CREATEDHCP_CHANGEDHCO_CREATEDHCO_CHANGEDHCO_REMOVEDHCO_INACTIVATEDRELATIONSHIP_CREATEDRELATIONSHIP_CHANGEDRELATIONSHIP_REMOVED${env}-internal-callback-derived-addresses-hcp4calc internal topic, containing HCP URIs${env}- internal-derived-addresses-hcp-create Manager bundle topic, processes Addresses sent${env}-internal-async-all-cleaner-callbacks Manager async topic, cleans orphaned crosswalksStepsAlgorithm has 3 stages: Stage I Event PublisherEvent Publisher routes all above event types to ${env}-internal-callback-derived-addresses-in topic, optional filtering by country/source. Stage II Callback Service Preprocessing StageIf event subType ~ HCP_*:pass targetEntity URI to ${env}-internal-callback-derived-addresses-hcp4calcIf event subtype ~ HCO_*:Find all ACTIVE relations of types ${walkRelationType} ending at this HCO in entityRelations collection.Extract URIs of all HCPs at starts of these relations and send them to topic ${env}-internal-callback-derived-addresses-hcp4calcIf event subtype ~ RELATIONSHIP_*:Find the relation by URI in entityRelations collection.Check if relation type matches the configured ${walkRelationType}Extract URI of the startObject (HCP) and send it to the topic ${env}-internal-callback-derived-addresses-hcp4calc Stage III Callback Service Main StageInput is HCP URI.Find HCP by URI in entityHistory collection. 
Check:If we cannot find entity in entityHistory, log error and skipIf found entity has other type than “configuration/entityTypes/HCP”, log error and skipIf entity has status LOST_MERGE/DELETED/INACTIVE, skipIn entityHistory, find all relations of types ${walkRelationType} starting at this HCP, extract HCO at the end of relationFor each extracted HCO (Hospital) do:Find HCO in entityHistory collectionWrap HCO Addresses in a Create HCP Request:Rewrite all sub-attributes from each ov==true Hospitals AddressAdd attributes from ${staticAddedFields}, according to strategy: overwrite or underwrite (add if missing)Add the required Country attribute (rewrite from HCP)Add two crosswalks:Data provider ${hubCrosswalk} with value: ${hcpId}_${hcoId}.Contributor provider Reltio type with HCP uri.Send Create HPC Request to Manager through bundle topicIf HCP has a crosswalk of type and sourceTable as below:type: ${hubCrosswalk.type}sourceTable: ${hubCrosswalk.sourceTable}value: ${hcpId}_${hcoId}but its hcoUri suffix does not match any Hospital found, send request to delete the crosswalk to MDM Manager.ConfigurationFollowing configurations have to be made (examples are for GBL tenants).Callback ServiceAdd and handle following section to CallbackService application.yml in GBL:\ncallback:\n...\n derivedAddresses:\n enabled: true\n walkRelationType: \n - configuration/relationTypes/HasHealthCareRole\n hubCrosswalk:\n type: HUB_Callback\n sourceTable: DerivedAddresses\n staticAddedFields:\n - attributeName: AddressType\n attributeValue: TYS.P\n strategy: over\n inputTopic: ${env}-internal-callback-derived-addresses-in\n hcp4calcTopic: ${env}-internal-callback-derived-addresses-hcp4calc\n outputTopic: ${env}-internal-derived-addresses-hcp-create\n cleanerTopic: ${env}-internal-async-all-cleaner-callbacks\nSince we are adding a new crosswalk, cleaning of which will be handled by the Derived Addresses callback itself, we should exclude this crosswalk from the Crosswalk Cleaner config 
(similar to HcoNames one):\ncallback:\n crosswalkCleaner:\n ...\n hardDeleteCrosswalkTypes:\n ...\n exclude:\n - type: configuration/sources/HUB_Callback\n sourceTable: DerivedAddresses\nManagerAdd below to the MDM Manager bundle config:\nbundle:\n...\n inputs:\n...\n - topic: "${env}-internal-derived-addresses-hcp-create"\n username: "mdm_callback_service_user"\n defaultOperation: hcp-create\nCheck DQ Rules configuration.If there are any rules that may reject the HUB_Callback/DerivedAddresses HCP Create, an exception should be made. Example: Validation Status is required.If Address refEntity is configured to be surrogate, add an exception and new rule, adding MD5 crosswalk to the Address:\n- name: generate address relation and refEnity crosswalk\n preconditions:\n - type: sourceAndSourceTable\n values:\n - source: HUB_Callback\n sourceTable: "DerivedAddresses"\n action:\n type: addressDigest\n value: MD5\n skipRefEntityCreation: false\n skipRefRelationCreation: false\n\n- name: Make surrogate crosswalk on address\n preconditions:\n - type: not\n preconditions:\n - type: sourceAndSourceTable\n values:\n - source: HUB_Callback\n sourceTable: "DerivedAddresses"\n action:\n type: addressCrosswalkValue\n value: surrogate\nEvent PublisherRouting rule has to be added:\n- id: derived_addresses_callback\n destination: "${env}-internal-derived-addresses-in"\n selector: "(exchange.in.headers.reconciliationTarget==null)\n && exchange.in.headers.eventType in ['simple']\n && exchange.in.headers.country in ['cn']\n && exchange.in.headers.eventSubtype in ['HCP_CREATED', 'HCP_CHANGED', 'HCO_CREATED', 'HCO_CHANGED', 'HCO_REMOVED', 'HCO_INACTIVATED', 'RELATIONSHIP_CREATED', 'RELATIONSHIP_CHANGED', 'RELATIONSHIP_REMOVED']"\nDependent ComponentsComponentUsageCallback ServiceMain component with flow implementationManagerProcessing HCP Create, Crosswalk Delete operationsEvent PublisherGeneration of incoming events"
},
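The address-wrapping step of Stage III can be sketched as below: copy the HCO's Address, apply `staticAddedFields` with the `over` (overwrite) or `under` (add if missing) strategy, rewrite the Country from the HCP, and build the `${hcpId}_${hcoId}` crosswalk value. Flat dicts stand in for the real nested attribute structure; this is illustrative only.

```python
def derive_hcp_address(hco_address, static_fields, hcp_country):
    """Build the Address for the Create HCP request from an ov==true
    HCO Address, per the Stage III steps above."""
    addr = dict(hco_address)
    for field in static_fields:
        name = field["attributeName"]
        # 'over' always overwrites; 'under' only fills a missing attribute
        if field["strategy"] == "over" or name not in addr:
            addr[name] = field["attributeValue"]
    addr["Country"] = hcp_country        # required Country, rewritten from HCP
    return addr

def hub_crosswalk_value(hcp_id, hco_id):
    # data-provider crosswalk value is ${hcpId}_${hcoId}
    return f"{hcp_id}_{hco_id}"
```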
{
"title": "HCONames Callback for IQVIA model",
"pageID": "164469742",
"pageLink": "/display/GMDM/HCONames+Callback+for+IQVIA+model",
"content": "DescriptionThe HCO names callback is responsible for calculating HCO Names. At first events are filtered, deduplicated and the list of impacted hcp is being evaluated. Then the new HCO are calculated. And finally if there is a need for update, the updates are being send for asynchronous processing in HUB Callback SourceFlow diagramSteps1. Impacted HCP GeneratorListen for the events on the ${env}-internal-callback-hconame-in topic.Filter out against the list of predefined countries (AI, AN, AG, AR, AW, BS, BB, BZ, BM, BO, BR, CL, CO, CR, CW, DO, EC, GT, GY, HN, JM, KY, LC, MX, NI, PA, PY, PE, PN, SV, SX, TT, UY, VG, VE).Filter out against the list of predefined event types (HCO_CREATED, HCO_CHANGED, RELATIONSHIP_CREATED, RELATIONSHIP_CHANGED).Split into two following branches. Results of both are then published on the ${env}-internal-callback-hconame-hcp4calc.Entity Event Stream1 extract the "Name" attribute from the target entity.2. reject the event if "Name" does not exist3. check if there was already a record with the identical Key + Name pair (a duplicate)4. reject the duplicate5. find the list of impacted HCPs based on the key6. return a flat stream of the key and the liste.g. key: entities/dL144Hk, impactedHCP: 1, 2, 3 return (entities/dL144Hk, 1), (entities/dL144Hk, 2), (entities/dL144Hk, 3)Relation Event Stream1. map Event to RelationWrapper(type,uRI,country,startURI,endURI,active,startObjectType,endObjectTyp)2. reject if any of fields missing3. check if there was already a record with the identical Key + Name pair (a duplicate)4. reject the duplicate5. find the list of impacted HCPs based on the key6. return a flat stream of the key and the liste.g. key: entities/dL144Hk, impactedHCP: 1, 2, 3 return (entities/dL144Hk, 1), (entities/dL144Hk, 2), (entities/dL144Hk, 3)2. 
HCO Names Update StreamListen for the events on the ${env}-internal-callback-hconame-hcp4calc.The incoming list of HCPs is passed to the calculator (described below).The HcoMainCalculatorResult contains hcpUri, a list of entityAddresses and the mainWorkplaceUri (to update)The result is being mapped to the RelationRequest The RelationRequest is generated to the "${env}-internal-hconames-rel-create" topic.3. HCP Calc Alogithmcalculate HCO NameHCOL1: get HCO from mongo where uri equals HCP.attributes.Workplace.refEntity.urireturn HCOL1.Namecalculate MainHCONameget all target HCO for relations (paremeter traverseRelationTypes) when start object id equals HCOL1 uri.for each target HCO (curHCO) doif target HCO is last in hierarchy thenreturn HCO.attributes.Nameelse if target HCO.attributes.TypeCode.lookupCode is on the configured list defined by parameter mainHCOTypeCodes for selected countryreturn HCO.attributes.Nameelse if target HCO.attributes.Taxonomy.StrType.lookupCode is on the configured list defined by parameter mainHCOStructurTypeCodes for selected countryreturn HCO.attributes.Nameelse if target HCO.attributes.ClassofTradeN.FacilityType.lookupCode is on the configured list defined by parameter mainHCOFacilityTypeCodes for selected countryreturn HCO.attributes.Nameelseget all target HCO when start object id is curHCO.uri (recursive call)update HCP addressesfind address in HCP.attributes.Address when Address.refEntity.uri=HCOL1.uriif found and address.HCOName<>calcHCOName or address.MainHcoName<>calcMainHCOName thencreate/update HasAddress relation using HUBCallback sourceTriggers*Filter whole tableHide columnsReset all filtersCopy the filter URLExport to PDFExport to CSVExport to WordPrintDocumentationWhat's newRate our appOops, it seems that you need to place a table or a macro generating a table within the Table Filter macro.Trigger actionComponentActionDefault timeIN Events incoming mdm-callback-service:HCONamesUpdateStream (callback package)Evaluates the list 
of affected HCPs. Based on that the HCO updates being sent when needed.realtime - events stream\n\n\n\n\nDependent componentsComponentUsageCallback ServiceMain component with flow implementationPublisherEvents publisher generates incoming eventsManagerAsynchronous process of generated eventsHub StoreCache"
},
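The recursive MainHCOName walk in the HCP Calc Algorithm can be sketched as follows. This is a simplified reading of the pseudocode above: `parents_of` stands in for the relation traversal (parameter `traverseRelationTypes`), and a single `stop_codes` set stands in for the three configured lookup-code lists (`mainHCOTypeCodes`, `mainHCOStructurTypeCodes`, `mainHCOFacilityTypeCodes`).

```python
def main_hco_name(hco_uri, hcos, parents_of, stop_codes):
    """Walk the HCO hierarchy upward; return the Name of the first HCO that
    is last in the hierarchy or whose lookup code is on a configured list."""
    for parent in parents_of.get(hco_uri, []):
        hco = hcos[parent]
        if not parents_of.get(parent):         # last in hierarchy
            return hco["Name"]
        if hco.get("TypeCode") in stop_codes:  # e.g. mainHCOTypeCodes match
            return hco["Name"]
        found = main_hco_name(parent, hcos, parents_of, stop_codes)
        if found:
            return found
    return None
```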
{
"title": "HCONames Callback for COMPANY model",
"pageID": "243863711",
"pageLink": "/display/GMDM/HCONames+Callback+for+COMPANY+model",
"content": "DescriptionHCONames Callback for COMPANY data model differs from the one for IQVIA model.Callback consists of two stages: preprocessing and main processing. Main processing stage takes in HCP URIs, so the preprocessing stage logic extracts such affected HCPs from HCO, HCP, RELATIONSHIP events.During main processing, Callback calculates trees, where nodes are HCOs (tree root is always the input HCP) and edges are Relationships. HCOs and MainHCOs are extracted from this tree. MainHCOs are chosen following some business specification from the Callback config. Direct Relationships from HCPs to MainHCOs are created (or cleaned if no longer applicable). If any of HCP's Addresses matches HCO/MainHCO Address, adequate sub-attribute is added to this Address.AlgorithmStage I - preprocessingInput topic: ${env}-internal-callback-hconame-inInput event types:HCO_CREATEDHCO_CHANGEDHCP_CREATEDHCP_CHANGEDRELATIONSHIP_CREATEDRELATIONSHIP_CHANGEDFor each HCO event from the topic:Deduplicate events by key (deduplication window size is configurable),using MongoDB entityRelations collection, build maximum dependency tree (recursive algorithm) consisting of HCPs and HCOs connected with:relations of type equal to hcoHcoTraverseRelationTypes from configuration,relations of type equal to hcoHcpTraverseRelationTypes from configuration,return all HCPs from the dependency tree (all visited HCPs),generate events having key and value equal to HCP uri and send to the main processing topic (${env}-internal-callback-hconame-hcp4calc).For each RELATIONSHIP event from the topic:Deduplicate events by key (deduplication window size is configurable),if relation's startObject is HCP:add HCP's entityURI to result list,if relation's startObject is HCO: similarly to HCO events preprocessing, build dependency tree and return all HCPs from the tree. 
HCP URIs are added to the result list,for each HCP on the result list, generate an event and send to the main processing topic (${env}-internal-callback-hconame-hcp4calc).For each HCP event from the topic:Deduplicate events by key (deduplication window size is configurable),generate events having key and value equal to HCP uri and send to the main processing topic (${env}-internal-callback-hconame-hcp4calc).Stage II - main processingInput topic: ${env}-internal-callback-hconame-hcp4calcFor each HCP from the topic:Deduplicate by entity URI (deduplication window size is configurable),fetch current state of HCP from MongoDB, entityHistory collection,traversing by HCP-HCO relation type from config, find all affiliated HCOs with "CON" descriptors,traversing by HCO-HCO relation type from config, find all affiliated HCOs with MainHCO: "REL.MAI" or "REL.HIE" descriptors,from the "CON" HCO list, find all MainHCO candidates - MainHCO candidate must pass the configured specification. Below is MainHCO spec in EMEA PROD:if not yet existing, create new HcoNames relationship to MainHCO candidates by generating a request and sending to Manager async topic: ${env}-internal-hconames-rel-create,if existing, but not on candidates list, delete the relationship by generating a request and sending to Manager async topic: ${env}-internal-async-all-cleaner-callbacks,if one of input HCP's Addresses matches HCO Address or MainHCO Address, generate a request adding "HCO" or "MainHCO" sub-attribute to the Address and send to Manager async topic: ${env}-internal-hconames-hcp-create.Processing events1. Find Impacted HCPListen for the events on the ${env}-internal-callback-hconame-in topic.Filter out against the list of predefined countries (GB, IE).Filter out against the list of predefined event types (HCO_CREATED, HCO_CHANGED, RELATIONSHIP_CREATED, RELATIONSHIP_CHANGED).Split into two following branches. 
Results of both are then published on the ${env}-internal-callback-hconame-hcp4calc.Entity Event Stream1. extract the "Name" attribute from the target entity.2. reject the event if "Name" does not exist3. check if there was already a record with the identical Key + Name pair (a duplicate)4. reject the duplicate5. find the list of impacted HCPs based on the key6. return a flat stream of the key and the list, e.g. key: entities/dL144Hk, impactedHCP: 1, 2, 3 return (entities/dL144Hk, 1), (entities/dL144Hk, 2), (entities/dL144Hk, 3)Relation Event Stream1. map Event to RelationWrapper(type,uRI,country,startURI,endURI,active,startObjectType,endObjectType)2. reject if any of the fields is missing3. check if there was already a record with the identical Key + Name pair (a duplicate)4. reject the duplicate5. find the list of impacted HCPs based on the key6. return a flat stream of the key and the list, e.g. key: entities/dL144Hk, impactedHCP: 1, 2, 3 return (entities/dL144Hk, 1), (entities/dL144Hk, 2), (entities/dL144Hk, 3)2. Select HCOs affiliated with HCPListen for the incoming list of HCPs on the ${env}-internal-callback-hconame-hcp4calc.For each HCP a list of affiliated HCOs is retrieved from the database. The HCP-HCO relation is based on type:configuration/relationTypes/ContactAffiliationsand description:"CON"3. Find Main HCO traversing HCO-HCO hierarchyFor each HCO from the list of selected HCOs above, a list of HCOs is retrieved from the database. The HCO-HCO relation is based on type:configuration/relationTypes/OtherHCOtoHCOAffiliationsand description:"RLE.MAI", "RLE.HIE"The step is repeated recursively until there are no affiliated HCOs or the Subtype matches the one provided in configuration.mainHcoIndicator.subTypeCode (STOP condition)The result is mapped to the RelationRequest The RelationRequest is generated to the "${env}-internal-hconames-rel-create" topic.4. 
Populate HcoName / Main HCO Name in HCP addresses if required So far there are two HCO lists: HCOs affiliated with the HCP and Main HCOs.There is a check whether the HCP fields HCOName and MainHCOName (also two lists) match the HCO names.If not, an HCP update event is generated.Address is a nested attribute in the model. Matching by uri must be replaced by matching by a key on attribute values. The match key will include AddressType, AddressLine1, AddressLine2, City, StateProvinance, Zip5.The same key is configured in Reltio for address deduping. Changes to the address key in Reltio must be consulted with the HUB team.The target attributes in addresses will be populated by creating a new HCP address having the same match key + HCOName and MainHCOName via the HubCallback source. Reltio will match the new address with the existing one based on the match key.Each HCP address will have its own HUBCallback crosswalk {type=HUB_Callback, value={Address Attribute URI}, sourceTable=HCO_NAME}5. Create HCO -> Main HCO affiliation if it does not exist There is also a check whether the HCP outgoing relations point to Main HCOs. Only relations with the type "configuration/relationTypes/ContactAffiliations" and description "MainHCO" are considered.Missing relations need to be created and inappropriate ones removed.Data model DependenciesComponentUsageCallback ServiceMain component with flow implementationPublisherRoutes incoming eventsManagerAsync processing of generated requests"
},
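The address match key described above (AddressType, AddressLine1, AddressLine2, City, StateProvinance, Zip5) and the "does the HCP address already carry the HCO name?" check can be sketched as below. Flat dicts stand in for the nested Address attribute; field names are taken from the key definition above, everything else is illustrative.

```python
# Fields of the address match key, as listed in the page above.
MATCH_KEY_FIELDS = ("AddressType", "AddressLine1", "AddressLine2",
                    "City", "StateProvinance", "Zip5")

def address_match_key(address):
    """Build the dedup key Reltio uses to match addresses."""
    return tuple(address.get(f, "") for f in MATCH_KEY_FIELDS)

def addresses_needing_update(hcp_addresses, hco_addresses_by_key, field):
    """Return (HCP address, expected name) pairs where the match key hits an
    HCO address but the HCOName/MainHCOName field does not yet carry it."""
    out = []
    for addr in hcp_addresses:
        hco = hco_addresses_by_key.get(address_match_key(addr))
        if hco and addr.get(field) != hco["Name"]:
            out.append((addr, hco["Name"]))
    return out
```

Each returned pair would translate into a new HCP address sent via the HubCallback source, which Reltio then merges with the existing address on the same match key.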
{
"title": "NotMatch Callback",
"pageID": "164469859",
"pageLink": "/display/GMDM/NotMatch+Callback",
"content": "DescriptionThe NotMatch callback was created to clear the potential match queue for the suspect matches when the Linkage has been created by the DerivedAffiliationsbatch process. During this batch process, affiliations are created between COV and ONEKEY HCO objects. The potential match queue is not cleared and this impacts the Data Steward process because DS does not know what matches have to be processed through the UI. Potential match queue is cleared during RELATIONSHIP events processing using the "NotMatch callback" process. The process invokes _notMatch operation in MDM and removed these matches from Reltio. All "_notMatch" matches are visible in the UI in the "Potental Matches"."Not a Match" TAB. Flow diagramStepsEvent publisher publishes simple events to $env-internal-callback-potentialMatchCleaner-in including RELATIONSHIP_CHANGED and RELATIONSHIP_CREATED events with Reltio source (limit to only the one loaded through DA batch)Only events with the correct event type are processed: RELATIONSHIP_CHANGED and RELATIONSHIP_CREATEDOnly events with the correct relationship type are processed. Accepted relationship types:FlextoHCOSAffiliationsFlextoDDDAffiliationsFlextoDDDAffiliationsThe HUB AUTOLINK Store is searchedif AUTOLINK match exists in the store _notMatch operation is executed in asynchronous modeelse event is skippedAll _notMatch operations are published to the $env-internal-async-all-notmatch-callbacks topic and the Manager process these operations in asynchronous mode. TriggersTrigger actionComponentActionDefault timeIN Events incoming mdm-callback-service:PotentialMatchLinkCleanerStreamprocess relationship events in streaming mode and sets _notMatch in MDMrealtime - events streamDependent componentsComponentUsageCallback ServiceMain component with flow implementationPublisherEvents publisher generates incoming eventsManagerReltio Adapter for _notMatch operation in asynchronous modeHub StoreMatches Store"
},
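The filtering and AUTOLINK-store lookup above can be sketched as follows. The event field names and the representation of the AUTOLINK store as a set of unordered URI pairs are assumptions for illustration.

```python
ACCEPTED_EVENTS = {"RELATIONSHIP_CREATED", "RELATIONSHIP_CHANGED"}
ACCEPTED_REL_TYPES = {"FlextoHCOSAffiliations", "FlextoDDDAffiliations"}

def not_match_operation(event, autolink_store):
    """Return a _notMatch operation when the relationship event passes the
    event-type and relationship-type filters and an AUTOLINK match exists
    in the store; otherwise None (the event is skipped)."""
    if event["eventType"] not in ACCEPTED_EVENTS:
        return None
    if event["relationType"] not in ACCEPTED_REL_TYPES:
        return None
    pair = frozenset((event["startObjectURI"], event["endObjectURI"]))
    if pair not in autolink_store:       # no AUTOLINK match recorded
        return None
    return {"operation": "_notMatch",
            "sourceEntityURI": event["startObjectURI"],
            "targetEntityURI": event["endObjectURI"]}
```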
{
"title": "PotentialMatchLinkCleaner Callback",
"pageID": "302702435",
"pageLink": "/display/GMDM/PotentialMatchLinkCleaner+Callback",
"content": "DescriptionAlgorithmCallback accepts relationship events - this is configurable, usually:RELATIONSHIP_CREATEDRELATIONSHIP_CHANGEDFor each event from inbound topic (${env}-internal-callback-potential-match-cleaner-in):event is filtered by eventType (acceptedRelationEventTypes list in configuration),event is filtered by relationship type (acceptedRelationObjectTypes list in configuration),extract startObjectURI and endObjectURI from event targetRelation,search MongoDB, collection entityMatchesHistory, for records having both URIs in matches and having same matchType (matchTypesInCache list in configuration),if found a record in cache, check if it has already been sent (boolean field in the document),if record has not been yet sent, generate a EntitiesNotMatchRequest containing two fields:sourceEntityURI,targetEntityURI,add the operation header and send the Request to Manager.DependenciesComponentUsageCallback ServiceMain component with flow implementationPublisherRoutes incoming eventsManagerAsync processing of generated requests"
},
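The cache lookup and "already sent" check can be sketched as below. The in-memory list of dicts stands in for the `entityMatchesHistory` MongoDB collection, and the `MATCH_TYPES_IN_CACHE` value is an assumed placeholder for the `matchTypesInCache` configuration.

```python
MATCH_TYPES_IN_CACHE = {"AUTOLINK"}     # assumption: matchTypesInCache config

def cleaner_request(event, match_cache):
    """match_cache documents stand in for entityMatchesHistory records:
    {'matches': frozenset of two URIs, 'matchType': str, 'sent': bool}."""
    uris = frozenset((event["startObjectURI"], event["endObjectURI"]))
    for doc in match_cache:
        if (doc["matches"] == uris
                and doc["matchType"] in MATCH_TYPES_IN_CACHE
                and not doc["sent"]):
            doc["sent"] = True          # remember the request was generated
            return {"sourceEntityURI": event["startObjectURI"],
                    "targetEntityURI": event["endObjectURI"]}
    return None
```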
{
"title": "PreCallbacks (Rankings/COMPANYGlobalCustomerId/Canada Micro-Bricks/HCPType)",
"pageID": "164469756",
"pageLink": "/pages/viewpage.action?pageId=164469756",
"content": "DescriptionThe main part of the process is responsible for setting up the Rank attributes on the specific Attributes in Reltio. Based on the input JSON events, the difference between the RAW entity and the Ranked entity is calculated and changes shared through the asynchronous topic to Manager. Only events that contain no changes are published to the next processing stage, it limits the number of events sent to the external Clients. Only data that is ranked and contains the correct callback is shared further. During processing, if changes are detected main events are skipped and a callback is executed. This will cause the generation of new events in Reltio and the next calculation. The next calculation should detect 0 changes but that may occur that process will fall into an infinity loop. Due to this, the MD5 checksum is implemented on the Entity and AttributeUpdate request to percent such a situation. The PreCallback is the setup with the chain of responsibility with the following steps:Enricher Processor Enrich object with RefLookup serviceMultMergeProcessor - change the ID of the main entity to the loser Id when the Main Entity is different from Target Entity - it means that the merge happened between timestamp when Reltio generated the EVENT and HUB retrieved the Entitty from Reltio. In that case the outcome entity contains 3 ID <New Winner, Old Winner as loser, loser>RankSorters Calculate rankings - transform entity with correct Ranks attributesBased on the calculated rank generate pre-callback events that will be sent to MangerGlobal COMPANY ID callback Generation of changes on COMPANYGlobalCustomerIDs <if required when there is a need to fix the ID>Canada Micro-Bricks Autofill Canada Micro-BricksHCPType Callback Calculate HCPType attribute based on Specilaity and SubTypeCode canonical Reltio codes. 
Cleaner Processor Clean reference attributes enriched in the first step (save in mongo only when cleanAdditionalRefAttributes is false)Inactivation Generator Generation of inactivated events (for each changed event)OtherHCOtoHCOAffiliations Rankings Generation of the event to full-delay topic to process Ranking changes on relationships objects Flow diagramStepsEntity Enricher publishes full enriched events to ${env}-internal-reltio-full-eventsThe event is enriched with additional data required in the ranking process. More details in Affiliation RankSorter that require enrichment of the HCO objects once ranking the Affiliation on HCP. Rankings are calculated based on the implemented RankSorters. Based on the activation criteria and the environment configuration the following Rank Sorters are activated:Address RankSorterAddresses RankSorterAffiliation RankSorterEmail RankSorterPhone RankSorterSpecialty RankSorterIdentifier RankSorterBased on the changes between sorted Entity and input entity, Callback is published to the next processing stage. In that case, Main Event is skipped.If no new changes are detected, Main Event is forwarder to further processing.The enriched data required in the Affiliation ranking is cleaned. This last step check the incoming event and generates an additional *_INACTIVATED event type once the Entity/Relation object contains EndDate (is inactive) TriggersTrigger actionComponentActionDefault timeIN Events incoming mdm-callback-service:PrecallbackStream (precallback package)Process full events, execute ranking services, generates callbacks, and published calculated events to the EventPublisher componentrealtime - events streamDependent componentsComponentUsageCallback ServiceMain component with flow implementationEntity EnricherGenerates incoming events full eventsManagerProcess callbacks generated by this serviceHub StoreCache-Store"
},
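The MD5 loop guard described above can be sketched as follows. This is a minimal illustrative sketch in Java, assuming a simple in-memory checksum store per entity; the class and method names (`ChecksumGuard`, `shouldSendCallback`) are hypothetical and not the HUB's actual API.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the MD5 loop guard: the checksum of a would-be
// AttributeUpdate request is compared with the last checksum stored for that
// entity; an unchanged checksum means the same update was already sent, so the
// callback is suppressed and the Reltio -> HUB -> Reltio cycle cannot repeat.
public class ChecksumGuard {
    private final Map<String, String> lastChecksum = new HashMap<>(); // entityUri -> md5

    static String md5(String payload) {
        try {
            byte[] digest = MessageDigest.getInstance("MD5")
                    .digest(payload.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) hex.append(String.format("%02x", b));
            return hex.toString();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    /** Returns true when the callback should be sent (checksum changed). */
    public boolean shouldSendCallback(String entityUri, String updateRequest) {
        String checksum = md5(updateRequest);
        boolean changed = !checksum.equals(lastChecksum.get(entityUri));
        if (changed) lastChecksum.put(entityUri, checksum);
        return changed;
    }
}
```

With this guard, the second recalculation of an unchanged entity produces the same checksum and the repeated callback is dropped, which is exactly what breaks the infinite loop.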
{
"title": "Global COMPANY ID callback",
"pageID": "218447103",
"pageLink": "/display/GMDM/Global+COMPANY+ID+callback",
"content": "The process provides a unique Global COMPANY ID to each entity. The current solution on the Reltio side overwrites an entity's Global COMPANY ID when it loses a merge. The Global COMPANY ID pre-callback solution was created to keep the Global COMPANY Id as a unique value for entity_uri.To fulfill the requirement, a solution based on the COMPANY Global ID Registry is prepared. It includes elements like below:Modification on Orchestrator/Manager side - during the entity creation processCreation of COMPANYGlobalId Pre-callback Modification on entity history to enrich search processLogical ArchitectureModification on Orchestrator/Manager side - during the entity creation processProcess descriptionThe request is sent to the HUB Manager - it may come from any allowed source, like ETL loading or the direct channel. The getCOMPANYIdOrRegister service is called and the entityURI with COMPANYGlobalId is stored in COMPANYIdRegistry. From an external system point of view, the response to a client is modified. COMPANY Global Id is a part of the main attributes section in the JSON file (not in a nest). 
In the response, there is information about OV true and false{    "uri": "entities/19EaDJ5L",    "status": "created",    "errorCode": null,    "errorMessage": null,    "COMPANYGlobalCustomerID": "04-125652694",    "crosswalk": {        "type": "configuration/sources/RX_AUDIT",        "value": "test1_104421022022_RX_AUDIT_1",        "deleteDate": ""    }}{    "uri": "entities/entityURI",    "type": "configuration/entityTypes/HCP",    "createdBy": "username",    "createdTime": 1000000000000,    "updatedBy": "username",    "updatedTime": 1000000000000,"attributes": {        "COMPANYGlobalCustomerID": [            {                "type": "configuration/entityTypes/HCP/attributes/COMPANYGlobalCustomerID",                "ov": true,                "value": "04-111855581",                "uri": "entities/10FoKMrf/attributes/COMPANYGlobalCustomerID/1jVIrkG2D"            },            {                "type": "configuration/entityTypes/HCP/attributes/COMPANYGlobalCustomerID",                "ov": false,                "value": "04-123653905",                "uri": "entities/10FoKMrf/attributes/COMPANYGlobalCustomerID/1jVIrosrm"            },            {                "type": "configuration/entityTypes/HCP/attributes/COMPANYGlobalCustomerID",                "ov": false,                "value": "04-124022162",                "uri": "entities/10FoKMrf/attributes/COMPANYGlobalCustomerID/1jVIrhcNY"            },            {                "type": "configuration/entityTypes/HCP/attributes/COMPANYGlobalCustomerID",                "ov": false,                "value": "04-117260591",                "uri": "entities/10FoKMrf/attributes/COMPANYGlobalCustomerID/1jVIrnM10"            },            {                "type": "configuration/entityTypes/HCP/attributes/COMPANYGlobalCustomerID",                "ov": false,                "value": "04-129895294",                "uri": "entities/10FoKMrf/attributes/COMPANYGlobalCustomerID/1mrOsvf6P"            },            {                
"type": "configuration/entityTypes/HCP/attributes/COMPANYGlobalCustomerID",                "ov": false,                "value": "04-112615849",                "uri": "entities/10FoKMrf/attributes/COMPANYGlobalCustomerID/2ZNzEowk3"            },            {                "type": "configuration/entityTypes/HCP/attributes/COMPANYGlobalCustomerID",                "ov": false,                "value": "04-111851893",                "uri": "entities/10FoKMrf/attributes/COMPANYGlobalCustomerID/2LG7Grmul"            }        ],3. How to store GlobalCOMPANYId process diagram - business level.Creation of COMPANYGlobalId Pre-callbackA publisher event model is extended with two new values:COMPANYGlobalCustomerIDs - list of IDs. For some merge events, there are two entityURI IDs. The order of the IDs must match the order of the IDs in the entitiesURIs field.parentCOMPANYGlobalCustomerID - it has a value only for the LOST_MERGE event type. It contains the winner entityURI.data class PublisherEvent(val eventType: EventType?,                          val eventTime: Long? = null,                          val entityModificationTime: Long? = null,                          val countryCode: String? = null,                          val entitiesURIs: List<String> = emptyList(),                          val targetEntity: Entity? = null,                          val targetRelation: Relation? = null,                          val targetChangeRequest: ChangeRequest? = null,                          val dictionaryItem: DictionaryItem? = null,                          val mdmSource: String?,                          val viewName: String? = DEFAULT_VIEW_NAME,                          val matches: List<MatchItem>? = null,                          val COMPANYGlobalCustomerIDs: List<String> = emptyList(),                          val parentCOMPANYGlobalCustomerID: String? 
= null,                          @JsonIgnore                          val checksumChanged: Boolean = false,                          @JsonIgnore                          val isPartialUpdate: Boolean = false,                          @JsonIgnore                          val isReconciliation: Boolean = falseChanges are also made in the entityHistory collection on the MongoDB sideFor each object in the collection, we also store COMPANYGlobalCustomerID:to have a relation between entityURI and COMPANYGlobalCustomerId to make it possible to search for an entity that lost a merge Additionally, new fields are stored in the Snowflake structure in %_HCP and %_HCO views in CUSTOMER_SL schema, like:COMPANY_GLOBAL_CUSTOMER_IDPARENT_COMPANY_GLOBAL_CUSTOMER_IDFrom an external system point of view, those internal changes are prepared to make the GlobalCOMPANYID field unique.In case of overwriting GlobalCOMPANYID on the Reltio MDM side (lost merge), the pre-callback's main task is to search for the original value in COMPANYIdRegistry. It will then insert this value into the entity in Reltio MDM that has been overwritten due to the lost merge.Process diagram: Search LOST_MERGE entity with its first Global COMPANY IDProcess diagram:Process description:MDM HUB gets SEARCH calls from an external system. The search parameter is Global COMPANY ID.Verify the entity status.  If the entity status is 'LOST_MERGE' then replace COMPANYGlobalCustomerId with parentCOMPANYGlobalCustomerId in the search requestMake a search call in Reltio with the enriched dataDependent components"
},
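The LOST_MERGE search enrichment described above can be sketched as follows. This is an illustrative Java sketch, assuming an in-memory registry stand-in for COMPANYIdRegistry; the names (`LostMergeSearchEnricher`, `resolveSearchId`) are hypothetical, not the HUB's actual API.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the LOST_MERGE search step: when the registry entry
// for the requested Global COMPANY ID has status LOST_MERGE, the search
// parameter is replaced with parentCOMPANYGlobalCustomerId (the winner's ID)
// before the search call is made against Reltio.
public class LostMergeSearchEnricher {

    record RegistryEntry(String status, String parentCOMPANYGlobalCustomerId) {}

    private final Map<String, RegistryEntry> registry = new HashMap<>();

    public void register(String globalId, String status, String parentId) {
        registry.put(globalId, new RegistryEntry(status, parentId));
    }

    /** Returns the Global COMPANY ID that should actually be sent to Reltio. */
    public String resolveSearchId(String requestedGlobalId) {
        RegistryEntry entry = registry.get(requestedGlobalId);
        if (entry != null && "LOST_MERGE".equals(entry.status())) {
            return entry.parentCOMPANYGlobalCustomerId();
        }
        return requestedGlobalId;
    }
}
```

An external client can thus keep searching by the ID it has always known, and the enrichment transparently redirects the lookup to the merge winner.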
{
"title": "Canada Micro-Bricks",
"pageID": "250138445",
"pageLink": "/display/GMDM/Canada+Micro-Bricks",
"content": "DescriptionThe process was designed to auto-fill the Micro Brick values on Addresses for Canadian market entities. The process is based on event streaming: the main event is recalculated based on the current state and, during comparison with the current mapping file, the changes are generated. The generated change (partial event) updates Reltio, which leads to another change. Only when the entity is fully updated is the main event published to the output topic and processed in the next stage in the event publisher. The process also registers the Changelog events on the topic. The Changelog events are saved only when the state of the entity is not partial. The Changelog events are required in the ReloadService that is triggered by the Airflow DAG. Business users may change the mapping file; this triggers the reload process, changelog events are processed, and the updates are generated in Reltio.For Canada, we created a new brick type "Micro Brick" and implemented a new pre-callback service to populate the brick codes based on the postal code mapping file:95% of postal codes won't be in the file and the MicroBrick code should be set to the first characters of the postal codeThe mapping file will contain postal code - MicroBrick code pairsThe mapping file will be delivered monthly, usually with no change.  However, 1-2 times a year the Business will go through a re-mapping exercise that could cause significant change.  Also, a few minor changes may happen (e.g., add new pair, etc.). A monthly change process will be added to the Airflow scheduler as a DAG. This DAG will be scheduled and will generate the export from Snowflake; when there are mapping changes, changelog events will trigger updates to the existing MicroBrick codes in Reltio. 
A new BrickType code has been added for Micro Brick - "UGM"Flow diagramLogical ArchitecturePreCallback LogicReload LogicStepsOverview Reltio attributesBrick"uri": "configuration/entityTypes/HCP/attributes/Addresses/attributes/Brick",                Brick Type:                RDM: A new BrickType code has been added for Micro Brick - "UGM"                                    "uri": "configuration/entityTypes/HCP/attributes/Addresses/attributes/Brick/attributes/Type",                                    "lookupCode": "rdm/lookupTypes/BrickType",                Brick Value:                                    "uri": "configuration/entityTypes/HCO/attributes/Addresses/attributes/Brick/attributes/Value",                                    "lookupCode": "rdm/lookupTypes/BrickValue",PostalCode:"uri": "configuration/entityTypes/HCP/attributes/Addresses/attributes/Zip5",Canada postal codes format:e.g: K1A 0B1PreCallback LogicFlow:Activation:Check if the feature flag activation is true and the acceptedCountries list contains the entity countryTake into account only the CHANGED and CREATED events in this pre-callback implementationSteps:For each address in the entity check:Check if the Address contains BrickType= microBrickType and BrickValue!=null and PostalCode!=nullCheck if PostalCode is in the micro-bricks-mapping.csv file: if true, compare; if different, generate UPDATE_ATTRIBUTE; if in sync, add AddressChange with all attributes to MicroBrickChangelog; if false, compare BrickValue with “numberOfPostalCodeCharacters” from PostalCode; if different, generate UPDATE_ATTRIBUTE; if in sync, add AddressChange with all attributes to MicroBrickChangelogCheck if the Address does not contain BrickType= microBrickType and BrickValue==null and PostalCode !=null: check if PostalCode is in the micro-bricks-mapping.csv file; if true, generate INSERT_ATTRIBUTE; if false, get “numberOfPostalCodeCharacters” from PostalCode and generate INSERT_ATTRIBUTEAfter the Addresses array is checked, the main event is blocked when partial. 
Only when there are 0 changes is the main event forwarded: if there are changes, send partialUpdate and skip the main event depending on the forwardMainEventsDuringPartialUpdate; if there are 0 changes, send MainEvent and push MicroBrickChangelog to the changelog topicNote: The service has 2 roles: the main role is to check PostalCode for each address against the mapping file and generate MicroBrick Changes (INSERT (initial) UPDATE (changes)). The second role is to push MicroBrickChangelog events when 0 changes are detected. It means this flow should keep the changelog topic in sync with all changes that are happening in Reltio (address was added/removed/changed). Because ReloadService will work on these changelog events and requires the exact URI to the BrickValue, this service needs to push all MicroBrickChangelog events with calculatedMicroBrickUri, calculatedMicroBrickValue, and the current value of postalCode for the specific address represented by the address URI.Reload Logic (Airflow DAG)Flow: ActivationBusiness users make changes on the Snowflake side to the micro bricks mapping.StepsThe DAG is scheduled once a month and processes changes made by the Business users; this triggers the Reload Logic on Callback-Service componentsGet changes from Snowflake and generate the micro-bricks-mapping.csv fileIf there are 0 changes END the processIf there are changes in the micro-bricks-mapping.csv file push the changes to the Consul. Load current Configuration to GIT and push micro-bricks-mapping.csv to Consul.Trigger API call on Callback-Service to reload Consul configuration - this causes the Pre-Callback processors and the ReloadService to use the new mapping file. 
Only after this operation is successful go to the next step:Copy events from the current topic to the reload topic using a temporary fileNote: the micro-brick process is divided into 2 steps: the Pre-Callback generates ChangeLog events to the $env-internal-microbricks-changelog-eventsThe Reload service reads the events from $env-internal-microbricks-changelog-reload-eventsThe main goal here is to copy events from one topic to another using the Kafka Console Producer and Consumer. The copy is made by the Kafka Console Consumer: a temporary file is generated with all events; the Consumer has to poll all events and wait 2 min until no new events are in the topic. After this time the Kafka Console Producer sends all events to the target topic.After the events are in the target $env-internal-microbricks-changelog-reload-events topic the next step described below starts automatically. 
Reload is made by the DAG and reloads the mapping file inside callback-service.Only after Consul Configuration is reloaded are the events pushed from the $env-internal-microbricks-changelog-events to the $env-internal-microbricks-changelog-reload-events.This triggers the MicroBrickReloadService; because it is based on Kafka Streams, the service subscribes to events in real timeSteps:New events to the $env-internal-microbricks-changelog-reload-events will trigger the following:Kafka Stream consumer that will read the changelogTopicFor each MicroBrickChangelog event check:for each address in addresses changes check:check if PostalCode is in the micro-bricks-mapping.csv fileif true and the current mapping value is different than calculatedMicroBrickValue  → generate UPDATE_ATTRIBUTEif false and calculatedMicroBrickValue is different than “numberOfPostalCodeCharacters” from PostalCode → generate UPDATE_ATTRIBUTEGather all changes and push them to the $env-internal-async-all-bulk-callbacksThe reload is required because it may happen that:A new row was addedThen AddressChange.postalCode will be in the micro-bricks-mapping.csv which means that calculatedMicroBrickValue will be different than the one that we now have in the mapping file so we need to trigger UPDATE_ATTRIBUTE.The existing row was updatedThen AddressChange.postalCode will be in the micro-bricks-mapping.csv and the calculatedMicroBrickValue will be different than the one that we now have in the mapping file so we need to trigger UPDATE_ATTRIBUTEThe existing row was removedThen AddressChange.postalCode will be missing in the mapping file, then we are going to compare calculatedMicroBrickValue with “numberOfPostalCodeCharacters” from PostalCode, this will be a difference so UPDATE_ATTRIBUTE will be generatedNote: The data model requires the calculatedMicroBrickUri because we need to trigger UPDATE_ATTRIBUTE on the specified BrickValue on a specific Address so an exact URI is required to work properly with the 
Reltio UPDATE_ATTRIBUTE operation. INSERT_ATTRIBUTE requires the URI only on the address attribute, and the body will contain BrickType and BrickValue (this insert is handled in the pre-callback implementation). The changes made by ReloadService will generate the next changes after the mapping file is updated. Once we trigger this event, Reltio will generate the change; this change will be processed by the pre-callback service (MicroBrickProcessor). The result of this processor will be no-change-detected (entity and mapping file are in sync) and new CHANGELOG event generation. It may happen that during a ReloadService run new Changelog events will be constantly generated, but this will not impact the current process because events from the original topic to the target topic are triggered by the manual copy during reloading. Additionally, the 24h compaction window on Kafka will overwrite old changes with new changes generated from pre-callback. So we will have only one newest key on the Kafka topic after this time, and these changes will be copied to the reload process after the next business change (1-2 times a year)Attachment docs with more details:IMPL: TEST:Data Model and ConfigurationChangeLog Event\nCHANGELOG Event:\n\nKafka KEY: entityUri\n\nBody:\ndata class MicroBrickChangelog(\n val entityUri: String,\n val addressesChanges: List<AddressChange>,\n)\ndata class AddressChange(\n val addressUri: String,\n val postalCode: String,\n val calculatedMicroBrickUri: String,\n val calculatedMicroBrickValue: String,\n)\n\n\nTriggersTrigger actionComponentActionDefault timeIN Events incoming Callback Service: Pre-Callback:Canada Micro-Brick LogicFull events trigger pre-callback stream and during processing, partial events are processed with generated changes. 
If data is in sync, a partial event is not generated, and the main event is forwarded to external clientsrealtime - events streamUser  - triggers a change in mappingAPI: Callback-service - sync consul ConfigurationPre-Callback:ReloadService - streamingThe business user changes the mapping file. The process refreshes the Consul store, copies data to the changelog topic, and this triggers real-time processing on the Reload serviceManual Trigger by Business Userrealtime - events streamDependent componentsComponentUsageCallback ServiceMain component of flow implementationEntity EnricherGenerates full incoming eventsManagerProcesses callbacks generated by this service"
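The micro-brick derivation rule above (mapped value if the postal code is in the file, otherwise the first N characters of the postal code) can be sketched as follows. This is a minimal Java sketch under stated assumptions; `MicroBrickResolver`, its method names, and the sample micro-brick codes are illustrative, not the HUB's actual API.

```java
import java.util.Map;

// Hypothetical sketch of the micro-brick value derivation: if the postal code
// is present in the mapping file, the mapped MicroBrick code wins; otherwise
// the code defaults to the first "numberOfPostalCodeCharacters" characters of
// the postal code (the 95% case described above).
public class MicroBrickResolver {

    private final Map<String, String> mapping; // postalCode -> micro-brick code
    private final int numberOfPostalCodeCharacters;

    public MicroBrickResolver(Map<String, String> mapping, int numberOfPostalCodeCharacters) {
        this.mapping = mapping;
        this.numberOfPostalCodeCharacters = numberOfPostalCodeCharacters;
    }

    /** The MicroBrick value this address should carry. */
    public String resolve(String postalCode) {
        String mapped = mapping.get(postalCode);
        if (mapped != null) return mapped;
        return postalCode.substring(0, Math.min(numberOfPostalCodeCharacters, postalCode.length()));
    }

    /** True when Reltio already holds the value this resolver would compute
     *  (the "0 changes" case: forward the main event, push the changelog). */
    public boolean inSync(String postalCode, String currentBrickValue) {
        return resolve(postalCode).equals(currentBrickValue);
    }
}
```

In the pre-callback flow, `inSync == false` would correspond to generating an UPDATE_ATTRIBUTE/INSERT_ATTRIBUTE partial event, and `inSync == true` to forwarding the main event.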
},
{
"title": "RankSorters",
"pageID": "302687133",
"pageLink": "/display/GMDM/RankSorters",
"content": ""
},
{
"title": "Address RankSorter",
"pageID": "164469761",
"pageLink": "/display/GMDM/Address+RankSorter",
"content": "GLOBAL - IQVIA modelThis feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. Address provided by source "Reltio" is higher in the hierarchy than the Address provided by "CRMMI" source. Based on this configuration, each address will be sorted in the following order:addressSource: "Reltio": 1 "EVR": 2 "OK": 3 "AMPCO": 4 "JPDWH": 5 "NUCLEUS": 6 "CMM": 7 "MDE": 8 "LocalMDM": 9 "PFORCERX": 10 "VEEVA_NZ": 11 "VEEVA_AU": 12 "VEEVA_PHARMACY_AU": 13 "CRMMI": 14 "FACE": 15 "KOL_OneView": 16 "GRV": 17 "GCP": 18 "MAPP": 19 "CN3RDPARTY": 20 "Rx_Audit": 21 "PCMS": 22 "CICR": 23Additionally, Address Rank Sorting is based on the following configuration:Address will be sorted based on AddressType attribute in the following order:addressType: "[TYS.P]": 1 "[TYS.PHYS]": 2 "[TYS.S]": 3 "[TYS.L]": 4 "[TYS.M]": 5 "[Mailing]": 6 "[TYS.F]": 7 "[TYS.HEAD]": 8 "[TYS.PHAR]": 9 "[Unknown]": 10Address will be sorted based on ValidationStatus attribute in the following order:addressValidationStatus: "[STA.3]": 1 "[validated]": 2 "[Y]": 3 "[STA.0]": 4 "[pending]": 5 "[NEW]": 6 "[RNEW]": 7 "[selfvalidated]": 8 "[SVALD]": 9 "[preregister]": 10 "[notapplicable]": 11 "[N]": 97 "[notvalidated]": 98 "[STA.9]": 99Address will be sorted based on Status attribute in the following order:addressStatus: "[VALD]": 1 "[ACTV]": 2 "[INAC]": 98 "[INVL]": 99Address rank sort process operates under the following conditions:First, before address ranking the Affiliation RankSorter has to be executed. It is required to get the appropriate value of the Workplace.PrimaryAffiliationIndicator attributeEach address is sorted with the following rules:sort by the PrimaryAffiliationIndicator value. The address with the "true" value is ranked higher in the hierarchy. 
The attribute used in this step is taken from the Workplace.PrimaryAffiliationIndicatorsort by Validation Status (lowest rank from the configuration on TOP) - attribute Address.ValidationStatussort by Status (lowest rank from the configuration on TOP) - attribute Address.Statussort by Source Name (lowest rank from the configuration on TOP) - this is calculated based on the Address.RefEntity.crosswalks, which means that each address is associated with the appropriate crosswalk and, based on the input configuration, the order is calculated.sort by Primary Affiliation (true value wins against false value) - attribute Address.PrimaryAffiliationsort by Address Type (lowest rank from the configuration on TOP) - attribute Address.AddressTypesort by Rank (lowest rank on TOP) in ascending order 1 -> 99 - attribute Address.AddressRanksort by LUD (highest LUD date on TOP) in descending order 2017.07 -> 2017.06 - attribute Address.RefEntity.crosswalks.updateDatesort by Label value alphabetically in ascending order A -> Z - attribute Address.labelSorted addresses are recalculated for the new Rank; each Address Rank is reassigned with an appropriate number from lowest to highest.Additionally:When refRelation.crosswalk.deleteDate exists, then the address is excluded from the sorting processWhen recalculated Address Rank has a value equal to "1" then BestRecord attribute is added with the value set to "true"Address rank sort process fallback operates under the following conditions:During Validation Status sorting from configuration (1.b), when the ValidationStatus attribute is missing, the address is placed on position 90 (which means that empty validation status is higher in the ranking than e.g. STA.9 status)During Status sorting from configuration (1.c), when the Status attribute is missing, the address is placed on position 90 (which means that empty status is higher in the ranking than e.g. 
INAC status)When the Source system name (1.d) is missing, the address is placed on position 99When the address Type (1.e) is empty, the address is placed on position 99When the Rank (1.f) is empty, the address is placed on position 99For multiple Address Types for the same relation, the address with the higher rank is takenBusiness requirements (provided by AJ)COMPANY Teams → BM3.3 for MDM → Design Documents → MDM Hub → Global-MDM_DQ_*"
},
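The sort rules above form an ordered comparator chain. The Java sketch below is a simplified illustration covering only a few of the listed keys (primary-affiliation flag, validation-status rank, source rank, label); the flat field names are hypothetical stand-ins for the nested Reltio attributes and crosswalks the real sorter reads.

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.IntStream;

// Simplified, illustrative sketch of the address comparator chain: earlier
// keys dominate later ones, and after sorting each Address Rank is reassigned
// from lowest to highest, as described in the section above.
public class AddressRankSorter {

    record Addr(boolean primaryAffiliation, int validationStatusRank,
                int sourceRank, String label, int rank) {}

    static List<Addr> rank(List<Addr> addresses) {
        Comparator<Addr> order = Comparator
                .comparing((Addr a) -> !a.primaryAffiliation())  // "true" ranked higher
                .thenComparingInt(Addr::validationStatusRank)    // lowest config rank on top
                .thenComparingInt(Addr::sourceRank)              // lowest config rank on top
                .thenComparing(Addr::label);                     // alphabetically A -> Z
        List<Addr> sorted = addresses.stream().sorted(order).toList();
        // Reassign Rank with consecutive numbers after sorting.
        return IntStream.range(0, sorted.size())
                .mapToObj(i -> new Addr(sorted.get(i).primaryAffiliation(),
                        sorted.get(i).validationStatusRank(),
                        sorted.get(i).sourceRank(),
                        sorted.get(i).label(), i + 1))
                .toList();
    }
}
```

The fallback positions (90/99 for missing attributes) would be applied before this comparison by substituting those values into the rank fields.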
{
"title": "Addresses RankSorter",
"pageID": "164469759",
"pageLink": "/display/GMDM/Addresses+RankSorter",
"content": "GLOBAL USThis feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. Address provided by source "ONEKEY" is higher in the hierarchy than the Address provided by "COV" source. Configuration is divided by country and source lists, for which this order is applicable.  Based on this configuration, each address will be sorted in the following order:addressesSource: - countries: - "ALL" sources: - "ALL" rankSortOrder: "Reltio" : 1 "ONEKEY" : 2 "IQVIA_RAWDEA" : 3 "IQVIA_DDD" : 4 "HCOS" : 5 "SAP" : 6 "SAPVENDOR" : 7 "COV" : 8 "DVA" : 9 "ENGAGE" : 10 "KOL_OneView" : 11 "ONEMED" : 11 "ICUE" : 12 "DDDV" : 13 "MMIT" : 14 "MILLIMAN_MCO" : 15 "SHS": 16 "COMPANY_ACCTS" : 17 "IQVIA_RX" : 18 "SEAGEN": 19 "CENTRIS" : 20 "ASTELAS" : 21 "EMD_SERONO" : 22 "MAPP" : 23 "VEEVALINK" : 24 "VALKRE" : 25 "THUB" : 26 "PTRS" : 27 "MEDISPEND" : 28 "PORZIO" : 29 Additionally, Addresses Rank Sorting is based on the following configuration:The address will be sorted based on AddressType attribute in the following order:addressType: "[OFFICE]": 1 "[PHYSICAL]": 2 "[MAIN]": 3 "[SHIPPING]": 4 "[MAILING]": 5 "[BILLING]": 6 "[SOLD_TO]": 7 "[HOME]": 8 "[PO_BOX]": 9Address rank sort process operates under the following conditions:Each address is sorted with the following rules:sort by address status (active addresses on top) - attribute Status (is Active)sort by the source order number from input source order configuration (lowest rank from the configuration on TOP) - source is taken from last updated crosswalk Addresses.RefEntity.crosswalks.updateDate once multiple from the same sourcesort by DEA flag (HCP only with DEA flag set to true on top) - attribute DEAFlagsort by SingleAddressIndicator (true on top) - attribute SingleAddressIndsort by Source Rank (lowest rank on TOP) in ascending order 1 -> 99 - for ONEKEY the rank is calculated with a minus sign - attribute Source.SourceRanksort by address type of HCO and MCO only (lowest 
rank from the configuration on TOP) - attribute AddressTypesort by COMPANYAddressId (addresses with this attribute are on top) - attribute COMPANYAddressIDSorted addresses are recalculated for the new Rank; each Address Rank is reassigned with an appropriate number from lowest to highest - attribute AddressRankAdditionally:When refRelation.crosswalk.deleteDate exists, then the address is excluded from the sorting processMORAWM03 explaining reverse rankings for ONEKEY Addresses:Here is the clarification:The minus rank can be related only to ONEKEY source and will be related to the lowest precedence address.All other sources, different than ONEKEY, contain the normal SourceRank source precedence - it means that the SourceRank 1 will be on top. We will sort SourceRank attribute in ascending order 1 -> 99 (lowest source rank on TOP), so SourceRank 1 will be first, SourceRank 2 second and so on.Due to the ONEKEY data in US - That rank code is a number from 10 to -10 with the larger number (i.e., 10) being the top ranked. We have a logic that makes an opposite ranking on ONEKEY SourceRank attribute. We are sorting in descending order …10 -> -10…, meaning that the rank 10 will be on TOP (highest source rank on TOP)We have reversed the SourceRank logic for ONEKEY; otherwise it would lead to -10 SourceRank ranked on TOP.In the US, ONEKEY Addresses contain a minus sign and are ranked in descending order. (10,9,8…-1,-2..-10)I am sorry for the confusion that was made in the previous explanation.This opposite logic for ONEKEY SourceRank data is in:Addresses: https://confluence.COMPANY.com/display/GMDM/Addresses+RankSorterDOC:EMEA/AMER/APACThis feature requires the following configuration:Address SourceThis map contains sources with appropriate sort numbers, which means e.g. Address provided by source "Reltio" is higher in the hierarchy than the Address provided by "ONEKEY" source. Configuration is divided by country and source lists, for which this order is applicable. 
Based on this configuration, each address will be sorted in the following order:EMEAaddressesSource: - countries: - GB - IE - FK - FR - BL - GP - MF - MQ - NC - PF - PM - RE - TF - WF - ES - DE - IT - VA - SM - TR - RU rankSortOrder: Reltio: 1 ONEKEY: 2 SAP: 3 SAPVENDOR: 4 PFORCERX: 5 PFORCERX_ODS: 5 KOL_OneView: 6 ONEMED: 6 ENGAGE: 7 MAPP: 8 SEAGEN: 9 GRV: 10 GCP: 11 SSE: 12 BIODOSE: 13 BUPA: 14 CH: 15 HCH: 16 CSL: 17 1CKOL: 18 VEEVALINK: 19 VALKRE: 201 THUB: 21 PTRS: 22 MEDISPEND: 23 PORZIO: 24 sources: - ALL - countries: - ALL rankSortOrder: Reltio: 1 ONEKEY: 2 MEDPAGESHCP: 3 MEDPAGESHCO: 3 SAP: 4 SAPVENDOR: 5 ENGAGE: 6 MAPP: 7 PFORCERX: 8 PFORCERX_ODS: 8 KOL_OneView: 9 ONEMED: 9 SEAGEN: 10 GRV: 11 GCP: 12 SSE: 13 SDM: 14 PULSE_KAM: 15 WEBINAR: 16 DREAMWEAVER: 17 EVENTHUB: 18 SPRINKLR: 19 VEEVALINK: 20 VALKRE: 21 THUB: 22 PTRS: 23 MEDISPEND: 24 PORZIO: 25 sources: - ALLAMERaddressesSource: - countries: - ALL rankSortOrder: Reltio: 1 DCR_SYNC: 2 ONEKEY: 3 IMSO: 4 CS: 5 PFCA: 6 WSR: 7 PFORCERX: 8 PFORCERX_ODS: 8 SAP: 9 SAPVENDOR: 10 LEGACY_SFA_IDL: 11 ENGAGE: 12 MAPP: 13 SEAGEN: 14 GRV: 15 KOL_OneView: 16 ONEMED: 16 GCP: 17 SSE: 18 RX_AUDIT: 19 VEEVALINK: 20 VALKRE: 21 THUB: 22 PTRS: 23 MEDISPEND: 24 PORZIO: 25 sources: - ALLAPACaddressesSource: - countries: - CN rankSortOrder: Reltio: 1 EVR: 2 MDE: 3 FACE: 4 GRV: 5 CN3RDPARTY: 6 PFORCERX: 7 PFORCERX_ODS: 7 KOL_OneView: 8 ONEMED: 8 ENGAGE: 9 MAPP: 10 GCP: 11 SSE: 12 VEEVALINK: 13 THUB: 14 PTRS: 15 sources: - ALL - countries: - ALL rankSortOrder: Reltio: 1 ONEKEY: 2 JPDWH: 3 VOD: 4 PFORCERX: 5 PFORCERX_ODS: 5 SAP: 6 SAPVENDOR: 7 KOL_OneView: 8 ONEMED: 8 ENGAGE: 9 MAPP: 10 SEAGEN: 11 GRV: 12 GCP: 13 SSE: 14 PCMS: 15 WEBINAR: 16 DREAMWEAVER: 17 EVENTHUB: 18 SPRINKLR: 19 VEEVALINK: 20 VALKRE: 21 THUB: 22 PTRS: 23 MEDISPEND: 24 PORZIO: 25 sources: - ALLAddress Type attribute:This map contains AddressType attribute values with appropriate sort numbers, which means e.g. 
Address Type AT.OFF is higher in the hierarchy than the AddressType AT.MAIL. Based on this configuration, each address will be sorted in the following order:addressType: "[OFF]": 1 "[BUS]": 2 "[DEL]": 3 "[LGL]": 4 "[MAIL]": 5 "[BILL]": 6 "[HOM]": 7 "[UNSP]": 99 Address Status attributeThis map contains Address Status attribute values with appropriate sort numbers, which means e.g. Address Status VALID is higher in the hierarchy than the Address Status ACTV. Based on this configuration, each address will be sorted in the following order:addressStatus: "[AS.VLD]": 1 "[AS.ACTV]": 1   NULL: 90 "[AS.INAC]": 99 "[AS.INVLD]": 99Address rank sort process operates under the following conditions:Each address is sorted with the following rules: sort by Primary affiliation indicator - address related to affiliation with primary usage tag on top, HCP and HCO addresses are compared by fields: AddressType, AddressLine1, AddressLine2, City, StateProvince and Zip5sort by Addresses.Primary attribute - primary addresses on TOP - applicable only for HCO entitiessort by address status Addresses.Status (contains the AddressStatus configuration)sort by the source order number from input source order configuration (lowest rank from the configuration on TOP) - source is taken from the last updated crosswalk Addresses.RefEntity.crosswalks.updateDate once multiple from the same sourcesort by address type (lowest rank from the configuration on TOP) - attribute Addresses.AddressTypesort by Source Rank (lowest rank on TOP) in ascending order 1 -> 99 - attribute Addresses.Source.SourceRanksort by COMPANYAddressId (addresses with this attribute are on top) - attribute Addresses.COMPANYAddressIDsort by address label (alphabetically from A to Z)Sorted addresses are recalculated for the new Rank; each Address Rank is reassigned with an appropriate number from lowest to highest - attribute AddressRankAdditionally:When refRelation.crosswalk.deleteDate exists, then the address is excluded from the sorting 
processBusiness requirements (provided by AJ)COMPANY Teams → BM3.3 for MDM → Design Documents → MDM Hub → Global-MDM_DQ_*"
},
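The ONEKEY-specific SourceRank handling clarified above can be sketched as follows. This is an illustrative Java sketch of within-source rank ordering only (it does not reproduce the full comparator chain); the names are hypothetical.

```java
import java.util.Comparator;
import java.util.List;

// Illustrative sketch of the reversed ONEKEY SourceRank: for every source
// except ONEKEY the lowest SourceRank wins (1 before 2 ... 99), while ONEKEY
// ranks run from 10 down to -10 with the larger number on top, so the sorter
// negates ONEKEY ranks before comparing.
public class SourceRankOrder {

    record Addr(String source, int sourceRank) {}

    static int effectiveRank(Addr a) {
        // ONEKEY: 10 is best, so negation makes 10 sort before 9 ... -10.
        return "ONEKEY".equals(a.source()) ? -a.sourceRank() : a.sourceRank();
    }

    static List<Addr> sortBySourceRank(List<Addr> addresses) {
        return addresses.stream()
                .sorted(Comparator.comparingInt(SourceRankOrder::effectiveRank))
                .toList();
    }
}
```

Without the negation, ONEKEY's -10 would incorrectly sort to the top, which is exactly the confusion the clarification above addresses.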
{
"title": "Affiliation RankSorter",
"pageID": "164469770",
"pageLink": "/display/GMDM/Affiliation+RankSorter",
"content": "GLOBAL - IQVIA modelThis feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. Workplace provided by source "Reltio" is higher in the hierarchy than the Workplace provided by "CRMMI" source. Based on this configuration, each workplace will be sorted in the following order:affiliation: "Reltio": 1 "EVR": 2 "OK": 3 "AMPCO": 4 "JPDWH": 5 "NUCLEUS": 6 "CMM": 7 "MDE": 8 "LocalMDM": 9 "PFORCERX": 10 "VEEVA_NZ": 11 "VEEVA_AU": 12 "VEEVA_PHARMACY_AU": 13 "CRMMI": 14 "FACE": 15 "KOL_OneView": 16 "GRV": 17 "GCP": 18 "MAPP": 19 "CN3RDPARTY": 20 "Rx_Audit": 21 "PCMS": 22 "CICR": 23The affiliation rank sort process operates under the following conditions:Each workplace is sorted with the following rules:sort by Source Name (lowest rank from the configuration on TOP) - this is calculated based on the Workplace.RefEntity.crosswalks, which means that each workplace is associated with the appropriate crosswalk, and based on the input configuration the order is calculated.sort by LUD (highest LUD date on TOP) in descending order 2017.07 -> 2017.06 - attribute Workplace.RefEntity.crosswalks.updateDatesort by Label value alphabetically in ascending order A -> Z - attribute Workplace.labelSorted workplaces are recalculated for the new PrimaryAffiliationIndicator attribute; each Workplace is reassigned with an appropriate value. The winner gets the "true" on the PrimaryAffiliationIndicator. Any loser, if one exists, is reassigned to "false"Additionally:When refRelation.crosswalk.deleteDate exists, then the workplace is excluded from the sorting processGLOBAL USThis feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. FacilityType with name "35" is higher in the hierarchy than FacilityType with the name "27". 
Based on this configuration, each affiliation will be sorted in the following order:facilityType: "35": 1 "MHS": 1 "34": 1 "27": 2Each affiliation before sorting is enriched with the ProviderAffiliation attribute, which contains information about the HCO, because it carries attributes that are needed during sorting.Affiliation rank sort process operates under the following conditions:Each affiliation is sorted with the following rules:sort by facility type (the lower number is on top) - attribute ClassofTradeN.FacilityTypesort by affiliation confidence code DESC (the higher number, or the one where it exists, is on top) - attribute RelationType.AffiliationConfidenceCodesort by staffed beds (if it exists it is higher, and higher number on top) - attribute Bed.Type("StaffedBeds").Totalsort by total prescribers (if it exists it is higher, and higher number on top) - attribute TotalPrescriberssort by org identifier (if it exists it is higher; otherwise it compares them as strings) - attribute Identifiers.Type("HCOS_ORG_ID").IDSorted affiliations are recalculated for new Rank - each Affiliation Rank is reassigned with an appropriate number from lowest to highest - attribute RankAffiliation with Rank = "1" is enriched with the UsageTag attribute with the "Primary" value.Additionally:If facility type is not found it is set to 99EMEA/AMER/APACThis feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. Contact Affiliation provided by source "Reltio" is higher in the hierarchy than the Contact Affiliation provided by "ONEKEY" source.  Configuration is divided by country and source lists, for which this order is applicable. 
Based on this configuration, each specialty will be sorted in the following order:EMEAaffiliation: - countries: - GB - IE - FK - FR - BL - GP - MF - MQ - NC - PF - PM - RE - TF - WF - ES - DE - IT - VA - SM - TR - RU rankSortOrder: Reltio: 1 ONEKEY: 2 SAP: 3 SAPVENDOR: 4 PFORCERX: 5 PFORCERX_ODS: 5 KOL_OneView: 6 ONEMED: 6 ENGAGE: 7 MAPP: 8 SEAGEN: 9 VALKRE: 10 GRV: 11 GCP: 12 SSE: 13 BIODOSE: 14 BUPA: 15 CH: 16 HCH: 17 CSL: 18 THUB: 19 PTRS: 20 1CKOL: 21 MEDISPEND: 22 VEEVALINK: 23 PORZIO: 24 sources: - ALL - countries: - ALL rankSortOrder: Reltio: 1 ONEKEY: 2 MEDPAGESHCP: 3 MEDPAGESHCO: 3 SAP: 4 SAPVENDOR: 5 PFORCERX: 6 PFORCERX_ODS: 6 KOL_OneView: 7 ONEMED: 7 ENGAGE: 8 MAPP: 9 SEAGEN: 10 VALKRE: 11 GRV: 12 GCP: 13 SSE: 14 SDM: 15 PULSE_KAM: 16 WEBINAR: 17 DREAMWEAVER: 18 EVENTHUB: 19 SPRINKLR: 20 THUB: 21 PTRS: 22 VEEVALINK: 23 MEDISPEND: 24 PORZIO: 25 sources: - ALL AMERaffiliation: - countries: - ALL rankSortOrder: Reltio: 1 DCR_SYNC: 2 ONEKEY: 3 SAP: 4 SAPVENDOR: 5 PFORCERX: 6 PFORCERX_ODS: 6 KOL_OneView: 7 ONEMED: 7 LEGACY_SFA_IDL: 8 ENGAGE: 9 MAPP: 10 SEAGEN: 11 VALKRE: 12 GRV: 13 GCP: 14 SSE: 15 IMSO: 16 CS: 17 PFCA: 18 WSR: 19 THUB: 20 PTRS: 21 RX_AUDIT: 22 VEEVALINK: 23 MEDISPEND: 24 PORZIO: 25 sources: - ALLAPACaffiliation: - countries: - CN rankSortOrder: Reltio: 1 EVR: 2 MDE: 3 FACE: 4 GRV: 5 CN3RDPARTY: 6 GCP: 7 SSE: 8 PFORCERX: 9 PFORCERX_ODS: 9 KOL_OneView: 10 ONEMED: 10 ENGAGE: 11 MAPP: 12 VALKRE: 13 THUB: 14 PTRS: 15 sources: - ALL - countries: - ALL rankSortOrder: Reltio: 1 ONEKEY: 2 JPDWH: 3 VOD: 4 SAP: 5 SAPVENDOR: 6 PFORCERX: 7 PFORCERX_ODS: 7 KOL_OneView: 8 ONEMED: 8 ENGAGE: 9 MAPP: 10 SEAGEN: 11 VALKRE: 12 GRV: 13 GCP: 14 SSE: 15 PCMS: 16 WEBINAR: 17 DREAMWEAVER: 18 EVENTHUB: 19 SPRINKLR: 20 THUB: 21 PTRS: 22 VEEVALINK: 23 MEDISPEND: 24 PORZIO: 25 sources: - ALLThe affiliation rank sort process operates under the following conditions:Each contact affiliation is sorted with the following rules:sort by affiliation status - active on topsort 
by source prioritysort by source rank - attribute ContactAffiliation.RelationType.Source.SourceRank, ascendingsort by confidence level - attribute ContactAffiliation.RelationType.AffiliationConfidenceCodesort by attribute last updated date - newest at the topsort by Label value alphabetically in ascending order A -> Z - attribute ContactAffiliation.labelSorted contact affiliations are recalculated for the new primary usage tag attribute each contact affiliation is reassigned with an appropriate value. The winner gets the "true" on the primary usage tag.Additionally:When refRelation.crosswalk.deleteDate exists, then the workplace is excluded from the sorting processBusiness requirements (provided by AJ)COMPANY Teams → BM3.3 for MDM → Design Documents → MDM Hub → Global-MDM_DQ_*"
},
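The multi-key workplace sort described above (source order, then LUD descending, then label, with deleted crosswalks excluded and the winner flagged primary) can be sketched as follows. This is an illustrative sketch only: the dict shape, the `source`/`lud`/`label` field names and the abbreviated SOURCE_ORDER map are simplified assumptions, not the actual Reltio attribute paths.

```python
# Illustrative sketch of the affiliation rank sort; field names are
# simplified stand-ins for the Workplace.RefEntity.crosswalks attributes.
SOURCE_ORDER = {"Reltio": 1, "EVR": 2, "OK": 3, "CRMMI": 14}  # excerpt

def rank_sort_workplaces(workplaces):
    # Workplaces whose crosswalk carries a deleteDate are excluded.
    active = [w for w in workplaces if not w.get("deleteDate")]
    # Sort: source order ascending, LUD descending, label ascending (A -> Z).
    active.sort(key=lambda w: (SOURCE_ORDER.get(w["source"], 99),
                               -w["lud"], w["label"]))
    # The winner gets PrimaryAffiliationIndicator = true, all losers false.
    for i, w in enumerate(active):
        w["PrimaryAffiliationIndicator"] = (i == 0)
    return active
```

Note that a single composite sort key reproduces the cascade of tie-break rules: later key components only matter when all earlier ones are equal.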
{
"title": "Email RankSorter",
"pageID": "164469768",
"pageLink": "/display/GMDM/Email+RankSorter",
"content": "GLOBAL - IQVIA modelThis feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. Email provided by source "1CKOL" is higher in the hierarchy than Email provided by any other source. Based on this configuration, each email address will be sorted in the following order:email: - countries: - "ALL" sources: - "ALL" rankSortOrder: "1CKOL": 1Email rank sort process operates under the following conditions:Each email is sorted with the following rulesGroup by the TypeIMS attribute and sort each group:sort by source rank (the lower number on top of the one with this attribute)sort by the validation status (VALID value is the winner) - attribute ValidationStatussort by LUD (highest LUD date on TOP) in descending order 2017.07 -> 2017.06 - attribute crosswalks.updateDatesort by email value alphabetically in ascending order A -> Z - attribute Email.emailSorted emails are recalculated for the new Rank - each Email Rank is reassigned with an appropriate numberGLOBAL USThis feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. Email provided by source "GRV" is higher in the hierarchy than Email provided by "ONEKEY" source. Configuration is divided by country and source lists, for which this order is applicable. 
Based on this configuration, each email address will be sorted in the following order:email: - countries: - "ALL" sources: - "ALL" rankSortOrder: "Reltio" : 1 "GRV" : 2 "ENGAGE" : 3 "KOL_OneView" : 4 "ONEMED" : 4 "ICUE" : 5 "MAPP" : 6 "ONEKEY" : 7 "SHS" : 8 "VEEVALINK": 9 "SEAGEN": 10 "CENTRIS" : 11 "ASTELAS" : 12 "EMD_SERONO" : 13 "IQVIA_RX" : 14 "IQVIA_RAWDEA" : 15 "COV" : 16 "THUB" : 17 "PTRS" : 18 "SAP" : 19 "SAPVENDOR": 20 "IQVIA_DDD" : 22 "VALKRE": 23 "MEDISPEND" : 24 "PORZIO" : 25Email rank sort process operates under the following conditions:Each email is sorted with the following rulessort by source order (the lower number on top)sort by source rank (the lower number on top of the one with this attribute)Sorted email are recalculated for new Rank - each Email Rank is reassigned with an appropriate numberEMEA/AMER/APACThis feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. Email provided by source "Reltio" is higher in the hierarchy than Email provided by "GCP" source. Configuration is divided by country and source lists, for which this order is applicable. 
Based on this configuration, each email address will be sorted in the following order:EMEAemail: - countries: - GB - IE - FK - FR - BL - GP - MF - MQ - NC - PF - PM - RE - TF - WF - ES - DE - IT - VA - SM - TR - RU rankSortOrder: Reltio: 1 1CKOL: 2 GCP: 3 GRV: 4 SSE: 5 ENGAGE: 6 MAPP: 7 VEEVALINK: 8 SEAGEN: 9 KOL_OneView: 10 ONEMED: 10 PFORCERX: 11 PFORCERX_ODS: 11 THUB: 12 PTRS: 13 ONEKEY: 14 SAP: 15 SAPVENDOR: 16 SDM: 17 BIODOSE: 18 BUPA: 19 CH: 20 HCH: 21 CSL: 22 MEDISPEND: 23 PORZIO: 24 sources: - ALL - countries: - ALL rankSortOrder: Reltio: 1 GCP: 2 GRV: 3 SSE: 4 ENGAGE: 5 MAPP: 6 VEEVALINK: 7 SEAGEN: 8 KOL_OneView: 9 ONEMED: 9 PULSE_KAM: 10 SPRINKLR: 11 WEBINAR: 12 DREAMWEAVER: 13 EVENTHUB: 14 PFORCERX: 15 PFORCERX_ODS: 15 THUB: 16 PTRS: 17 ONEKEY: 18 MEDPAGESHCP: 19 MEDPAGESHCO: 19 SAP: 20 SAPVENDOR: 21 SDM: 22 MEDISPEND: 23 PORZIO: 24 sources: - ALLAMERemail: - countries: - ALL rankSortOrder: Reltio: 1 DCR_SYNC: 2 GCP: 3 GRV: 4 SSE: 5 ENGAGE: 6 MAPP: 7 VEEVALINK: 8 SEAGEN: 9 KOL_OneView: 10 ONEMED: 10 PFORCERX: 11 PFORCERX_ODS: 11 ONEKEY: 12 IMSO: 13 CS: 14 PFCA: 15 WSR: 16 THUB: 17 PTRS: 18 SAP: 19 SAPVENDOR: 20 LEGACY_SFA_IDL: 21 RX_AUDIT: 22 MEDISPEND: 23 PORZIO: 24 sources: - ALLAPACemail: - countries: - CN rankSortOrder: Reltio: 1 EVR: 2 MDE: 3 FACE: 4 GRV: 5 CN3RDPARTY: 6 ENGAGE: 7 MAPP: 8 VEEVALINK: 9 KOL_OneView: 10 ONEMED: 10 PFORCERX: 11 PFORCERX_ODS: 11 THUB: 12 PTRS: 13 sources: - ALL - countries: - ALL rankSortOrder: Reltio: 1 JPDWH: 2 PCMS: 3 GCP: 4 GRV: 5 SSE: 6 ENGAGE: 7 MAPP: 8 VEEVALINK: 9 SEAGEN: 10 KOL_OneView: 11 ONEMED: 11 SPRINKLR: 12 WEBINAR: 13 DREAMWEAVER: 14 EVENTHUB: 15 PFORCERX: 16 PFORCERX_ODS: 16 THUB: 17 PTRS: 18 ONEKEY: 19 VOD: 20 SAP: 21 SAPVENDOR: 22 MEDISPEND: 23 PORZIO: 24 sources: - ALLEmail rank sort process operates under the following conditions:Each email is sorted with the following rules sort by cleanser status - valid/invalidsort by source order (the lower number on top)sort by source rank (the lower number on 
top of the one with this attribute)sort by last updated date - newest at the topsort by email value alphabetically in ascending order A -> Z - attribute Email.labelSorted email are recalculated for new Rank - each Email Rank is reassigned with an appropriate numberBusiness requirements (provided by AJ)COMPANY Teams → BM3.3 for MDM → Design Documents → MDM Hub → Global-MDM_DQ_*"
},
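The group-then-sort-then-rank pattern used by the IQVIA-model email sort above (group by TypeIMS, then source order, validation status, LUD descending, email value) can be sketched like this. The record shape and the `source`/`updateDate`/`email` field names are simplified assumptions; the SOURCE_ORDER map is an excerpt of the configured hierarchy.

```python
from itertools import groupby

SOURCE_ORDER = {"1CKOL": 1}  # excerpt of the configured source hierarchy

def rank_sort_emails(emails):
    # Emails are grouped by TypeIMS; each group is ranked independently.
    by_type = sorted(emails, key=lambda e: e["TypeIMS"])
    for _, group in groupby(by_type, key=lambda e: e["TypeIMS"]):
        ordered = sorted(group, key=lambda e: (
            SOURCE_ORDER.get(e["source"], 99),                 # source order
            0 if e.get("ValidationStatus") == "VALID" else 1,  # VALID wins
            -e["updateDate"],                                  # newest LUD first
            e["email"],                                        # A -> Z fallback
        ))
        for rank, e in enumerate(ordered, start=1):
            e["Rank"] = rank
    return emails
```

Since source order precedes validation status in the key, a configured source wins even over a VALID email from an unconfigured source.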
{
"title": "Identifier RankSorter",
"pageID": "164469766",
"pageLink": "/display/GMDM/Identifier+RankSorter",
"content": "IQVIA Model (Global)AlgorithmThe identifier rank sort process operates under the following conditions:Each Identifier is grouped by Identifier Type: e.g GRV_ID / GCP ID / MI_ID / Physician_Code /. .. each group is sorted separately.Each group is sorted with the following rules:By identifier "Source System order configuration" (lowest rank from the configuration on TOP)By identifier Order (lower ranks on TOP) in descending order 1 -> 99 - attribute OrderBy update date (LUD) (highest LUD date on TOP) in descending order 2017.07 -> 2017.06  - attribute crosswalks.updateDateBy Identifier value (alphabetically in ascending order A -> Z)Sorted identifiers are optionally deduplicated (by Identifier Type in each group) from each group, the lowest in rank and the duplicated identifier is removed. Currently the ( isIgnoreAndRemoveDuplicates = False) is set to False, which means that groups are not deduplicated. Duplicates are removed by Reltio.Sorted identifiers are recalculated for the new Rank each Rank (for each sorted group) is reassigned with an appropriate number from lowest to highest. - attribute - OrderIdentifier rank sort process fallback operates under the following conditions:When Identifier Type is empty each empty identifier is grouped together. Each identifier with an empty type is added to the "EMPTY" group and sorted and DE duplicated separately.During source system from configuration (2.a) sorting when Source system is missing identifier is placed on 99 positionDuring Rank (, 2.b) sorting when the Source system is missing identifier is placed on 99 positionSource Order Configuration This feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. Identifier provided by source "Reltio" is higher in the hierarchy than the Identifier provided by the "CRMMI" source. 
Based on this configuration each identifier will be sorted in the following order:Updated: 2023-12-29EnvironmentGlobal (EX-US)Countries(in environment)CNOthersSource OrderReltio: 1EVR: 2MDE: 3MAPP: 4FACE: 5CRMMI: 6KOL_OneView: 7GRV: 8CN3RDPARTY: 9Reltio: 1EVR: 2OK: 3AMPCO: 4JPDWH: 5NUCLEUS: 6CMM: 7MDE: 8LocalMDM: 9PFORCERX: 10VEEVA_NZ: 11VEEVA_AU: 12VEEVA_PHARMACY_AU: 13CRMMI: 14FACE: 15KOL_OneView: 16GRV: 17GCP: 18MAPP: 19CN3RDPARTY: 20Rx_Audit: 21PCMS: 22CICR: 23COMPANY ModelAlgorithmIdentifier Rank sort algorithm slightly varies from the IQVIA model one:Identifiers are grouped by Type (Identifiers.Type field). Identifiers without a Type count as a separate group.Each group is sorted separately according to following rules:By Trust flag (Identifiers.Trust field). "Yes" takes precedence over "No". If Trust flag is missing, it's as if it was equal to "No".By Source Order (table below). Lowest rank from configuration takes precedence. If a Source is missing in configuration, it gets the lowest possible order (99).By Status (Identifiers.Status). Valid/Active status takes precedence over Invalid/Inactive/missing status. List of status codes is configurable. Currently (2023-12-29), the following codes are configured in all COMPANY environments:Valid codes: [HCPIS.VLD], [HCPIS.ACTV], [HCOIS.VLD], [HCOIS.ACTV]Invalid codes: [HCPIS.INAC], [HCPIS.INVLD], [HCOIS.INAC], [HCOIS.INVLD]By Source Rank (Identifiers.SourceRank field). Lowest rank takes precedence.By LUD. Latest LUD takes precedence. LUD is equal to the highest of 3 dates: providing crosswalk's createDateproviding crosswalk's updateDateproviding crosswalk's singleAttributeUpdateDate for this Identifier (if present)By ID alphabetically. This is a fallback mechanism.Sorted identifiers are recalculated for the new Rank each Rank (for each sorted group) is reassigned with an appropriate number from lowest to highest. 
- attribute - Rank.Source Order ConfigurationUpdated: 2023-12-29EnvironmentUSAMEREMEAAPACCountries (in environment)ALLALLEU:GBIEFRBLGPMFMQNCPFPMRETFWFESDEITVASMTRRUOthers (AfME)CNOthersSource OrderReltio: 1ONEKEY: 2ICUE: 3ENGAGE: 4KOL_OneView: 5ONEMED: 5GRV: 6SHS: 7IQVIA_RX: 8IQVIA_RAWDEA: 9SEAGEN: 10CENTRIS: 11MAPP: 12ASTELAS: 13EMD_SERONO: 14COV: 15SAP: 16SAPVENDOR: 17IQVIA_DDD: 18PTRS: 19Reltio: 1ONEKEY: 2PFORCERX: 3PFORCERX_ODS: 3KOL_OneView: 4ONEMED: 4LEGACY_SFA_IDL: 5ENGAGE: 6MAPP: 7SEAGEN: 8GRV: 9GCP: 10SSE: 11IMSO: 12CS: 13PFCA: 14SAP: 15SAPVENDOR: 16PTRS: 17RX_AUDIT: 18Reltio: 1ONEKEY: 2PFORCERX: 3PFORCERX_ODS: 3KOL_ONEVIEW: 4ENGAGE: 5MAPP: 6SEAGEN: 7GRV: 8GCP: 9SSE: 101CKOL: 11SAP: 12SAPVENDOR: 13BIODOSE: 14BUPA: 15CH: 16HCH: 17CSL: 18Reltio: 1ONEKEY: 2MEDPAGES: 3MEDPAGESHCP: 3MEDPAGESHCO: 3PFORCERX: 4PFORCERX_ODS: 4KOL_ONEVIEW: 5ENGAGE: 6MAPP: 7SEAGEN: 8GRV: 9GCP: 10SSE: 11PULSE_KAM: 12WEBINAR: 13SAP: 14SAPVENDOR: 15SDM: 16PTRS: 17Reltio: 1EVR: 2MDE: 3FACE: 4GRV: 5CN3RDPARTY: 6GCP: 7PFORCERX: 8PFORCERX_ODS: 8KOL_OneView: 9ONEMED: 9ENGAGE: 10MAPP: 11PTRS: 12Reltio: 1ONEKEY: 2JPDWH: 3VOD: 4PFORCERX: 5PFORCERX_ODS: 5KOL_OneView: 6ONEMED: 6ENGAGE: 7MAPP: 8SEAGEN: 9GRV: 10GCP: 11SSE: 12PCMS: 13PTRS: 14SAP: 15SAPVENDOR: 16Business requirements (provided by AJ)COMPANY Teams → BM3.3 for MDM → Design Documents → MDM Hub → Global-MDM_DQ_*"
},
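The COMPANY-model rules above - LUD taken as the highest of three crosswalk dates, Type-based grouping with the "EMPTY" fallback, Trust taking precedence, and position 99 for a missing source - could look like this sketch. All dict shapes and the lowercase `source`/`lud` field names are assumptions (the Status step is omitted for brevity), and dates are simplified to integers.

```python
def identifier_lud(crosswalk, identifier_uri):
    # LUD = highest of createDate, updateDate and the identifier's own
    # singleAttributeUpdateDate (if present). Dates simplified to ints.
    return max(
        crosswalk.get("createDate", 0),
        crosswalk.get("updateDate", 0),
        crosswalk.get("singleAttributeUpdateDates", {}).get(identifier_uri, 0),
    )

def group_key(identifier):
    # Identifiers without a Type form their own "EMPTY" group.
    return identifier.get("Type") or "EMPTY"

def sort_key(identifier, source_order):
    return (
        0 if identifier.get("Trust") == "Yes" else 1,    # "Yes" beats "No"/missing
        source_order.get(identifier.get("source"), 99),  # missing source -> 99
        identifier.get("SourceRank", 99),                # lowest rank first
        -identifier.get("lud", 0),                       # latest LUD first
        identifier.get("ID", ""),                        # alphabetic fallback
    )
```

Tuples compare element by element, so Trust decides first and the ID comparison only ever breaks remaining ties.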
{
"title": "OtherHCOtoHCOAffiliations RankSorter",
"pageID": "319291956",
"pageLink": "/display/GMDM/OtherHCOtoHCOAffiliations+RankSorter",
"content": "APAC COMPANY (currently for AU and NZ)Business requirements (provided by AJ)COMPANY Teams → BM3.3 for MDM → Design Documents → MDM Hub → Global-MDM_DQ_*The functionality is configured in the callback delay service. Allows you to set different types of sorting for each country. The configuration for AU and NZ is shown below.rankSortOrder: affiliation: - countries: - AU - NZ rankExecutionOrder: - type: ATTRIBUTE attributeName: RelationType/RelationshipDescription lookupCode: true order: REL.HIE: 1 REL.MAI: 2 REL.FPA: 3 REL.BNG: 4 REL.BUY: 5 REL.PHN: 6 REL.GPR: 7 REL.MBR: 8 REL.REM: 9 REL.GPSS: 10 REL.WPC: 11 REL.WPIC: 12 REL.DOU: 13 - type: ACTIVE - type: SOURCE order: Reltio: 1 ONEKEY: 2 JPDWH: 3 SAP: 4 PFORCERX: 5 PFORCERX_ODS: 5 KOL_OneView: 6 ONEMED: 6 ENGAGE: 7 MAPP: 8 GRV: 9 GCP: 10 SSE: 11 PCMS: 12 PTRS: 13 - type: LUDRelationships are grouped by endObjectId, then the whole bundle is sorted and ranked. The relationship's position on the list (its rank) for AU and NZ is calculated based on the following algorithm:sorting by RelationshipDescription attribute  - relationship with REL.HIE value on topsorting by relationship activity - active at the topsort by source position - Reltio source on topsort by LUD (newest on top)"
},
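The typed rankExecutionOrder chain above (ATTRIBUTE, then ACTIVE, then SOURCE, then LUD) amounts to a composite sort key applied per endObjectId bundle. A sketch under assumed field names (`relationshipDescription`, `active`, `source`, `lud`, `endObjectId` are simplifications; the order maps are abbreviated excerpts of the AU/NZ configuration):

```python
# Sketch of the rankExecutionOrder comparator chain for AU/NZ.
REL_DESC_ORDER = {"REL.HIE": 1, "REL.MAI": 2, "REL.FPA": 3}  # type: ATTRIBUTE
SOURCE_ORDER = {"Reltio": 1, "ONEKEY": 2, "JPDWH": 3}        # type: SOURCE

def rank_key(rel):
    return (
        REL_DESC_ORDER.get(rel["relationshipDescription"], 99),  # ATTRIBUTE
        0 if rel["active"] else 1,                               # ACTIVE on top
        SOURCE_ORDER.get(rel["source"], 99),                     # SOURCE
        -rel["lud"],                                             # LUD, newest first
    )

def rank_relationships(relations):
    # Relationships are grouped by endObjectId, then each bundle is ranked.
    groups = {}
    for rel in relations:
        groups.setdefault(rel["endObjectId"], []).append(rel)
    for bundle in groups.values():
        for rank, rel in enumerate(sorted(bundle, key=rank_key), start=1):
            rel["rank"] = rank
    return relations
```

Because the ATTRIBUTE step leads the key, a REL.HIE relationship outranks an active one from a better source.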
{
"title": "Phone RankSorter",
"pageID": "164469748",
"pageLink": "/display/GMDM/Phone+RankSorter",
"content": "GLOBAL - IQVIA modelThis feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. Phones provided by source "Reltio" is higher in the hierarchy than the Address provided by "EVR" source. Based on this configuration, each phonewill be sorted in the following order:phone: - countries: - "ALL" sources: - "ALL" rankSortOrder: "Reltio": 1 "EVR": 2 "OK": 3 "AMPCO": 4 "JPDWH": 5 "NUCLEUS": 6 "CMM": 7 "MDE": 8 "LocalMDM": 9 "PFORCERX": 10 "VEEVA_NZ": 11 "VEEVA_AU": 12 "VEEVA_PHARMACY_AU": 13 "CRMMI": 14 "FACE": 15 "KOL_OneView": 16 "GRV": 17 "GCP": 18 "MAPP": 19 "CN3RDPARTY": 20 "Rx_Audit": 21 "PCMS": 22 "CICR": 23Phone rank sort process operates under the following conditions:Each phone is sorted with the following rulesGroup by the TypeIMS attribute and sort each group:sort by "Source System order configuration" (lowest rank from the configuration on TOP)sort by source rank (the lower number on top of the one with this attribute)sort by the validation status (VALID value is the winner) - attribute ValidationStatussort by LUD (highest LUD date on TOP) in descending order 2017.07 -> 2017.06 - attribute crosswalks.updateDatesort by number value alphabetically in ascending order A -> Z - attribute Phone.numberSorted phones are recalculated for the new Rank - each Phone Rank is reassigned with an appropriate numberGLOBAL USThis feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. Phone provided by source "ONEKEY" is higher in the hierarchy than the Phone provided by "ENGAGE" source.  Configuration is divided by country and source lists, for which this order is applicable. 
Based on this configuration, each phone number will be sorted in the following order:phone: - countries: - "ALL" sources: - "ALL" rankSortOrder: "Reltio" : 1 "ONEKEY" : 2 "ICUE" : 3 "VEEVALINK" : 4 "ENGAGE" : 5 "KOL_OneView" : 6 "ONEMED" : 6 "GRV" : 7 "SHS" : 8 "IQVIA_RX" : 9 "IQVIA_RAWDEA" : 10 "SEAGEN": 11 "CENTRIS" : 12 "MAPP" : 13 "ASTELAS" : 14 "EMD_SERONO" : 15 "COV" : 16 "SAP" : 17 "SAPVENDOR": 18 "IQVIA_DDD" : 19 "VALKRE" : 20 "THUB" : 21 "PTRS" : 22 "MEDISPEND" : 23 "PORZIO" : 24Phone number rank sort process operates under the following conditions:Each phone number is sorted with the following rules, on top, it is grouped by type.Group by the Type attribute and sort each group sort by source order (the lower number on top) - source name is taken from the last updated crosswalk for this Phone attributesort by source rank (the lower number on top or the one with this attribute) - attribute Source.SourceRank for this Phone attributeSorted phone numbers are recalculated for new Rank - each Phone Rank is reassigned with an appropriate number - attribute Rank for Phone attributeEMEA/AMER/APACThis feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. Phone provided by source "ONEKEY" is higher in the hierarchy than the Phone provided by "ENGAGE" source.  Configuration is divided by country and source lists, for which this order is applicable. 
Based on this configuration, each phone number will be sorted in the following order:EMEAphone: - countries: - GB - IE - FK - FR - BL - GP - MF - MQ - NC - PF - PM - RE - TF - WF - ES - DE - IT - VA - SM - TR - RU rankSortOrder: Reltio: 1 ONEKEY: 2 PFORCERX: 3 PFORCERX_ODS: 3 VEEVALINK: 4 KOL_OneView: 5 ONEMED: 5 ENGAGE: 6 MAPP: 7 SEAGEN: 8 GRV: 9 GCP: 10 SSE: 11 1CKOL: 12 THUB: 13 PTRS: 14 SAP: 15 SAPVENDOR: 16 BIODOSE: 17 BUPA: 18 CH: 19 HCH: 20 CSL: 21 MEDISPEND: 22 PORZIO: 23 sources: - ALL - countries: - ALL rankSortOrder: Reltio: 1 ONEKEY: 2 MEDPAGESHCP: 3 MEDPAGESHCO: 3 PFORCERX: 4 PFORCERX_ODS: 4 VEEVALINK: 5 KOL_OneView: 6 ONEMED: 6 ENGAGE: 7 MAPP: 8 SEAGEN: 9 GRV: 10 GCP: 11 SSE: 12 PULSE_KAM: 13 SPRINKLR: 14 WEBINAR: 15 DREAMWEAVER: 16 EVENTHUB: 17 SAP: 18 SAPVENDOR: 19 SDM: 20 THUB: 21 PTRS: 22 MEDISPEND: 23 PORZIO: 24 sources: - ALLAMERphone: - countries: - ALL rankSortOrder: Reltio: 1 DCR_SYNC: 2 ONEKEY: 3 PFORCERX: 4 PFORCERX_ODS: 4 VEEVALINK: 5 KOL_OneView: 6 ONEMED: 6 LEGACY_SFA_IDL: 7 ENGAGE: 8 MAPP: 8 SEAGEN: 9 GRV: 10 GCP: 11 SSE: 12 IMSO: 13 CS: 14 PFCA: 15 WSR: 16 SAP: 17 SAPVENDOR: 18 THUB: 19 PTRS: 20 RX_AUDIT: 21 MEDISPEND: 22 PORZIO: 23 sources: - ALLAPACphone: - countries: - CN rankSortOrder: Reltio: 1 EVR: 2 MDE: 3 FACE: 4 GRV: 5 CN3RDPARTY: 6 GCP: 7 PFORCERX: 8 PFORCERX_ODS: 8 VEEVALINK: 9 KOL_OneView: 10 ONEMED: 10 ENGAGE: 11 MAPP: 12 PTRS: 13 sources: - ALL - countries: - ALL rankSortOrder: Reltio: 1 ONEKEY: 2 JPDWH: 3 VOD: 4 PFORCERX: 5 PFORCERX_ODS: 5 VEEVALINK: 6 KOL_OneView: 7 ONEMED: 7 ENGAGE: 8 MAPP: 9 SEAGEN: 10 GRV: 11 GCP: 12 SSE: 13 PCMS: 14 THUB: 15 PTRS: 16 SAP: 17 SAPVENDOR: 18 SPRINKLR: 19 WEBINAR: 20 DREAMWEAVER: 21 EVENTHUB: 22 MEDISPEND: 23 PORZIO: 24 sources: - ALLPhone number rank sort process operates under the following conditions:Each phone number is sorted with the following rules, on top, it is grouped by type.Group by the Type attribute and sort each group  sort by cleanser status - valid/invalidsort by source 
order (the lower number on top) - source name is taken from the last updated crosswalk for this Phone attributesort by source rank (the lower number on top or the one with this attribute) - attribute Source.SourceRank for this Phone attributelast update date - newest to oldestsort by label - alphabetical order A-ZSorted phone numbers are recalculated for new Rank - each Phone Rank is reassigned with an appropriate number - attribute Rank for Phone attributeBusiness requirements (provided by AJ)COMPANY Teams → BM3.3 for MDM → Design Documents → MDM Hub → Global-MDM_DQ_*"
},
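The configurations above are divided by country lists, with "ALL" acting as the default entry. Resolving which rankSortOrder applies to a given record could be sketched like this; the CONFIG entries are abbreviated from the APAC configuration and the lookup strategy (first explicit match, then the "ALL" entry) is an assumption consistent with how the entries are laid out:

```python
# Sketch of resolving a country-specific rankSortOrder with "ALL" fallback.
CONFIG = [
    {"countries": ["CN"], "rankSortOrder": {"Reltio": 1, "EVR": 2, "MDE": 3}},
    {"countries": ["ALL"], "rankSortOrder": {"Reltio": 1, "ONEKEY": 2, "JPDWH": 3}},
]

def resolve_rank_order(country):
    # An entry explicitly listing the country wins over the "ALL" default.
    for entry in CONFIG:
        if country in entry["countries"]:
            return entry["rankSortOrder"]
    for entry in CONFIG:
        if "ALL" in entry["countries"]:
            return entry["rankSortOrder"]
    return {}
```

For example, a CN phone uses the CN-specific hierarchy, while a JP phone falls through to the "ALL" entry.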
{
"title": "Speaker RankSorter",
"pageID": "337862629",
"pageLink": "/display/GMDM/Speaker+RankSorter",
"content": "DescriptionUnlike other RankSorters, Speaker Rank is expressed not by a nested "Rank" or "Order" field, but by the "ignore" flag."Ignore" flag sets the attribute's "ov" to false. By operating this flag, we assure that only the most valuable attribute is visible and sent downstream from Hub.AlgorithmSort all Speaker nestsSort by source hierarchyIf same source, sort by Last Update Date (higher of crosswalk.updateDate / crosswalk.singleAttributeUpdateDates/{speaker attribute uri})If same source and LUD, sort by attribute URI (fallback strategy)Process sorted groupIf first Speaker nest has ignored == true, set ignored := false for that nestIf every next Speaker nest does not have ignored == true, set ignored := true for that nestPost the list of changes to Manager's async interface using Kafka topicGlobal - IQVIA ModelSpeaker RankSorter is active only for China. Source hierarchy is as follows:speaker: "Reltio": 1 "MAPP": 2 "FACE": 3 "EVR": 4 "MDE": 5 "CRMMI": 6 "KOL_OneView": 7 "GRV": 8 "CN3RDPARTY": 9Specific ConfigurationUnlike other PreCallback flows, Speaker RankSorter requires both ov=true and ov=false attribute values to work correctly.This is why:Entity Enricher configuration must be altered, to enrich entities with ov&nonOv values of "Speaker" attribute:\nbundle:\n nonOv: false\n ov: false\n nonOvAttributesToInclude:\n - "Speaker"\nPreCallback Service configuration must be altered to assure that nonOv values are cleaned from the event before passing it further down to the Event Publisher\ncleanOvFalseAttributeValues:\n - "Speaker"\nBusiness requirements (provided by AJ)COMPANY Teams → BM3.3 for MDM → Design Documents → MDM Hub → Global-MDM_DQ_*"
},
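The ignore-flag recalculation step above (first nest un-ignored, every other nest ignored, only actual changes posted downstream) can be sketched as follows; the nest dict shape is a simplified assumption:

```python
# Sketch of the Speaker "ignore" recalculation: after sorting best-first,
# only the top nest stays visible (ov = true); the rest are ignored.
def recalc_ignore_flags(sorted_nests):
    changes = []  # only changed nests would be posted to Manager via Kafka
    for position, nest in enumerate(sorted_nests):
        desired = position != 0  # first nest: ignored=false, rest: ignored=true
        if nest.get("ignored", False) != desired:
            nest["ignored"] = desired
            changes.append(nest)
    return changes
```

Returning only the changed nests keeps the async update to Manager minimal: an already-correct nest produces no Kafka traffic.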
{
"title": "Specialty RankSorter",
"pageID": "164469746",
"pageLink": "/display/GMDM/Specialty+RankSorter",
"content": "GLOBAL - IQVIA modelThis feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. Specialty provided by source "Reltio" is higher in the hierarchy than the Specialty provided by the "CRMMI" source. Additionally, for Specialities, there is a difference between countries. The configuration for RU and TD contains only 4 sources and is different than the base configuration. Based on this configuration each specialty will be sorted in the following order:specialities: - countries: - "RU" - "TR" sources: - "ALL" rankSortOrder: "GRV": 1 "GCP": 2 "OK": 3 "KOL_OneView": 4 - countries: - "ALL" sources: - "ALL" rankSortOrder: "Reltio": 1 "EVR": 2 "OK": 3 "AMPCO": 4 "JPDWH": 5 "NUCLEUS": 6 "CMM": 7 "MDE": 8 "LocalMDM": 9 "PFORCERX": 10 "VEEVA_NZ": 11 "VEEVA_AU": 12 "VEEVA_PHARMACY_AU": 13 "CRMMI": 14 "FACE": 15 "KOL_OneView": 16 "GRV": 17 "GCP": 18 "MAPP": 19 "CN3RDPARTY": 20 "Rx_Audit": 21 "PCMS": 22 "CICR": 23The specialty rank sort process operates under the following conditions:Each Specialty is grouped by Specialty Type: SPEC/TEND/QUAL/EDUC each group is sorted separately.Each group is sorted with the following rules:By specialty "Source System order configuration" (lowest rank from the configuration on TOP)By specialty Rank (lower ranks on TOP) in descending order 1 -> 99By update date (LUD) (highest LUD date on TOP) in descending order 2017.07 -> 2017.06 - attribute crosswalks.updateDateBy Specialty Value (alphabetically in ascending order A -> Z)Sorted specialties are optionally deduplicated (by Specialty Type in each group) from each group, the lowest in rank and the duplicated specialty is removed. Currently the ( isIgnoreAndRemoveDuplicates = False) is set to False, which means that groups are not deduplicated. 
Duplicates are removed by Reltio.Sorted specialties are recalculated for the new Ranks each Rank (for each sorted group) is reassigned with an appropriate number from lowest to highest.Additionally, for the Specialty Rank = 1 the best record is set to true - attribute - PrimarySpecialtyFlagSpecialty rank sort process fallback operates under the following conditions:When Specialty Type is empty each empty specialty is grouped together. Each specialty with an empty type is added to the "EMPTY" group and sorted and DE duplicated separately.During source system from configuration (2.a) sorting when Source system is missing specialty is placed on 99 positionDuring Rank (, 2.b) sorting when the Source system is missing specialty is placed on 99 positionGLOBAL USThis feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. Speciality provided by source "ONEKEY" is higher in the hierarchy than the Speciality provided by the "ENGAGE" source. Configuration is divided by country and source lists, for which this order is applicable. 
Based on this configuration, each Speciality will be sorted in the following order:specialities: - countries: - "ALL" sources: - "ALL" rankSortOrder: "Reltio" : 1 "ONEKEY" : 2 "IQVIA_RAWDEA" : 3 "VEEVALINK" : 4 "ENGAGE" : 5 "KOL_OneView" : 6 "ONEMED" : 6 "SPEAKER" : 7 "ICUE" : 8 "SHS" : 9 "IQVIA_RX" : 10 "SEAGEN": 11 "CENTRIS" : 12 "ASTELAS" : 13 "EMD_SERONO" : 14 "MAPP" : 15 "GRV" : 16 "THUB" : 17 "PTRS" : 18 "VALKRE" : 19 "MEDISPEND" : 20 "PORZIO" : 21The specialty rank sort process operates under the following conditions:Specialty is sorted with the following rules, but on the top, it is grouped by Speciality.SpecialityType attribute:Group by Speciality.SpecialityType attribute and sort each group: sort by specialty unspecified status value (higher value on the top) - attribute Specialty with value Unspecifiedsort by source order number (the lower number on the top) - source name is taken from crosswalk that was last updatedsort by source rank (the lower on the top) - attribute Source.SourceRanksort by last update date (the earliest on the top) - last update date is taken from lately updated crosswalksort by specialty attribute value (string comparison) - attribute SpecialtySorted specialties are recalculated for new Rank - each Specialty Rank is reassigned with an appropriate number - attribute RankAdditionally:If the source is not found it is set to 99If specialty unspecified attribute name or value is not set it is set to 99EMEA/AMER/APACThis feature requires the following configuration. This map contains sources with appropriate sort numbers, which means e.g. Speciality provided by source "ONEKEY" is higher in the hierarchy than the Speciality provided by the "ENGAGE" source. Configuration is divided by country and source lists, for which this order is applicable. 
Based on this configuration, each Speciality will be sorted in the following order:EMEAspecialities: - countries: - GB - IE - FK - FR - BL - GP - MF - MQ - NC - PF - PM - RE - TF - WF - ES - DE - IT - VA - SM - TR - RU rankSortOrder: Reltio: 1 ONEKEY: 2 PFORCERX: 3 PFORCERX_ODS: 3 VEEVALINK: 4 KOL_OneView: 5 ONEMED: 5 ENGAGE: 6 MAPP: 7 SEAGEN: 8 GRV: 9 GCP: 10 SSE: 11 THUB: 12 PTRS: 13 1CKOL: 14 MEDISPEND: 15 PORZIO: 16 sources: - ALL - countries: - ALL sources: - ALL rankSortOrder: Reltio: 1 ONEKEY: 2 MEDPAGESHCP: 3 MEDPAGESHCO: 3 PFORCERX: 4 PFORCERX_ODS: 4 VEEVALINK: 5 KOL_OneView: 6 ONEMED: 6 ENGAGE: 7 MAPP: 8 SEAGEN: 9 GRV: 10 GCP: 11 SSE: 12 PULSE_KAM: 13 WEBINAR: 14 DREAMWEAVER: 15 EVENTHUB: 16 SPRINKLR: 17 THUB: 18 PTRS: 19 MEDISPEND: 20 PORZIO: 21AMERspecialities: - countries: - ALL rankSortOrder: Reltio: 1 DCR_SYNC: 2 ONEKEY: 3 PFORCERX: 4 PFORCERX_ODS: 4 VEEVALINK: 5 KOL_OneView: 6 ONEMED: 6 LEGACY_SFA_IDL: 7 ENGAGE: 8 MAPP: 9 SEAGEN: 10 GRV: 11 GCP: 12 SSE: 13 THUB: 14 PTRS: 15 RX_AUDIT: 16 PFCA: 17 WSR: 18 MEDISPEND: 19 PORZIO: 20 sources: - ALLAPACspecialities: - countries: - CN rankSortOrder: Reltio: 1 EVR: 2 MDE: 3 FACE: 4 GRV: 5 CN3RDPARTY: 6 GCP: 7 SSE: 8 PFORCERX: 9 PFORCERX_ODS: 9 VEEVALINK: 10 KOL_OneView: 11 ONEMED: 11 ENGAGE: 12 MAPP: 13 THUB: 14 PTRS: 15 sources: - ALL - countries: - ALL rankSortOrder: Reltio: 1 ONEKEY: 2 JPDWH: 3 VOD: 4 PFORCERX: 5 PFORCERX_ODS: 5 VEEVALINK: 6 KOL_OneView: 7 ONEMED: 7 ENGAGE: 8 MAPP: 9 SEAGEN: 10 GRV: 11 GCP: 12 SSE: 13 PCMS: 14 WEBINAR: 15 DREAMWEAVER: 16 EVENTHUB: 17 SPRINKLR: 18 THUB: 19 PTRS: 20 MEDISPEND: 21 PORZIO: 22 sources: - ALLThe specialty rank sort process operates under the following conditions:Specialty is sorted with the following rules, but on the top, it is grouped by Speciality.SpecialityType attribute:Group by Speciality.SpecialityType attribute and sort each group: sort by specialty unspecified status value (higher value on the top) - attribute Specialty with value Unspecifiedsort by 
source order number (the lower number on the top) - source name is taken from crosswalk that was last updatedsort by source rank (the lower on the top) - attribute Source.SourceRanksort by last update date (the earliest on the top) - last update date is taken from lately updated crosswalksort by specialty attribute value (string comparison) - attribute SpecialtySorted specialties are recalculated for new Rank - each Specialty Rank is reassigned with an appropriate number - attribute Rank. The primary flag is set for the top ranked specialty.Additionally:If the source is not found it is set to 99If specialty unspecified attribute name or value is not set it is set to 99Business requirements (provided by AJ)COMPANY Teams → BM3.3 for MDM → Design Documents → MDM Hub → Global-MDM_DQ_*"
},
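The grouped specialty ranking above, with its fallback defaults (an "EMPTY" group for missing types, position 99 for unconfigured sources) and the primary flag on the top rank, could look like this sketch. The record shape and lowercase `source` field are assumptions, the SOURCE_ORDER map is abbreviated, and the unspecified-status and LUD steps are omitted for brevity:

```python
# Sketch of the grouped specialty ranking with fallback defaults.
SOURCE_ORDER = {"Reltio": 1, "ONEKEY": 2, "ENGAGE": 6}  # abbreviated excerpt

def rank_sort_specialties(specialties):
    # Group by SpecialityType; missing types form their own "EMPTY" group.
    groups = {}
    for s in specialties:
        groups.setdefault(s.get("SpecialityType") or "EMPTY", []).append(s)
    for group in groups.values():
        ordered = sorted(group, key=lambda s: (
            SOURCE_ORDER.get(s.get("source"), 99),  # missing source -> 99
            s.get("SourceRank", 99),
            s.get("Specialty", ""),                 # string comparison fallback
        ))
        for rank, s in enumerate(ordered, start=1):
            s["Rank"] = rank
            s["PrimarySpecialtyFlag"] = (rank == 1)
    return specialties
```

Each type group is ranked independently, so an "EMPTY"-typed specialty can hold Rank 1 alongside the Rank 1 of the SPEC group.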
{
"title": "Enricher Processor",
"pageID": "302687243",
"pageLink": "/display/GMDM/Enricher+Processor",
"content": "EnricherProcessor is the first PreCallback processor applied to incoming events. It enriches reference attributes with refEntity attributes, for the Rank calculation purposes. Usually, enriched attributes are removed after applying all PreCallbacks - this is configurable using cleanAdditionalRefAttributes flag. The only exception is GBL (EX-US), where attributes remain for CN. Removing "borrowed" attributes is carried out by the Cleaner Processor.AlgorithmFor targetEntity:Find reference attributes matching configurationFor each such attribute:Walk the relation to get endObject entityFetch endObject entity's current state through Manager (using cache)Rewrite entity's attributes to this reference attribute, inserting them in <Attribute>.refEntity.attributes pathsteps a-b are applied recursively, according to configured maxDepth.ExampleBelow is EnricherProcessor config from APAC PROD's Precallback Service:\nrefLookupConfig:\n - cleanAdditionalRefAttributes: true\n country:\n - AU\n - IN\n - JP\n - KR\n - NZ\n entities:\n - attributes:\n - ContactAffiliations\n type: HCP\n maxDepth: 2\nHow to read the config:for entities with Country: Australia, India, Japan, South Korea or New Zealand,of entity type HCP,enrich ContactAffiliations, so that it contains refEntity's attributes as sub-attributes,do that with depth 2 - so simply take HCO's attributes and insert them into ContactAffiliations.refEntity.attributes,after all calculations have finished, remove "borrowed" attributes, so that event passed to Event Publisher does not have them."
},
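The enrichment walk can be sketched as a small recursion. This is an illustrative sketch only: `fetch_entity` stands in for the Manager lookup (with cache), and the dict shape of entities and reference attributes is simplified.

```python
def enrich(entity, ref_attrs, fetch_entity, max_depth=2, depth=1):
    """Copy each referenced endObject's attributes under refEntity.attributes."""
    if depth > max_depth:
        return entity
    for attr_name in ref_attrs:
        for ref in entity.get("attributes", {}).get(attr_name, []):
            # Fetch the endObject entity's current state (Manager + cache in HUB).
            end_obj = fetch_entity(ref["objectURI"])
            # Recurse first so nested references are enriched up to max_depth.
            enrich(end_obj, ref_attrs, fetch_entity, max_depth, depth + 1)
            # "Borrow" the endObject's attributes under refEntity.attributes.
            ref.setdefault("refEntity", {})["attributes"] = end_obj["attributes"]
    return entity
```

With maxDepth 2 and ContactAffiliations configured, this reproduces the documented behavior: the HCO's attributes land under ContactAffiliations.refEntity.attributes.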
{
"title": "Cleaner Processor",
"pageID": "302687603",
"pageLink": "/display/GMDM/Cleaner+Processor",
"content": "Cleaner Processor removes attributes enriched by the Enricher Processor. It is one of the last processors in the Precallback Service's execution order. The processor checks the cleanAdditionalRefAttributes flag in the config.AlgorithmFor targetEntity:Find all refLookupConfig entries applicable for this Country.For all attributes in the found entries, remove the refEntity.attributes map."
},
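The cleanup step is the inverse of the enrichment: drop the borrowed map. A minimal sketch, with the same simplified dict shapes assumed (not the HUB code):

```python
def clean(entity, ref_attrs):
    """Remove the refEntity.attributes map borrowed by the Enricher Processor."""
    for attr_name in ref_attrs:
        for ref in entity.get("attributes", {}).get(attr_name, []):
            ref.get("refEntity", {}).pop("attributes", None)
    return entity
```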
{
"title": "Inactivation Generator",
"pageID": "302697554",
"pageLink": "/display/GMDM/Inactivation+Generator",
"content": "Inactivation Generator is one of Precallback Service's event Processors. It checks the input event's targetEntity and changes the event type to INACTIVATED if it detects one of the below:for entities:targetEntity's endDate is set,for relations:targetRelation's endDate is set,targetRelation's startRefIgnored == true,targetRelation's endRefIgnored == true.AlgorithmFor each event:If targetEntity is not null and targetEntity.endDate is null, skip event,If targetRelation is not null:If targetRelation.endDate is null and targetRelation.startRefIgnored is not true and targetRelation.endRefIgnored is not true, skip event,Search the mapping for the adequate output event type, according to the table below. If no match is found, skip event:Inbound event type -> Outbound event typeHCP_CREATED, HCP_CHANGED -> HCP_INACTIVATEDHCO_CREATED, HCO_CHANGED -> HCO_INACTIVATEDMCO_CREATED, MCO_CHANGED -> MCO_INACTIVATEDRELATIONSHIP_CREATED, RELATIONSHIP_CHANGED -> RELATIONSHIP_INACTIVATEDReturn the same event with the new event type, according to the table above."
},
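The event-type rewrite is essentially a trigger check plus a lookup table. A hedged sketch, with event field access simplified to plain dicts (not the service's real event model):

```python
# Inbound -> outbound mapping, per the table above.
INACTIVATION_MAP = {
    "HCP_CREATED": "HCP_INACTIVATED", "HCP_CHANGED": "HCP_INACTIVATED",
    "HCO_CREATED": "HCO_INACTIVATED", "HCO_CHANGED": "HCO_INACTIVATED",
    "MCO_CREATED": "MCO_INACTIVATED", "MCO_CHANGED": "MCO_INACTIVATED",
    "RELATIONSHIP_CREATED": "RELATIONSHIP_INACTIVATED",
    "RELATIONSHIP_CHANGED": "RELATIONSHIP_INACTIVATED",
}

def maybe_inactivate(event):
    entity, rel = event.get("targetEntity"), event.get("targetRelation")
    triggered = False
    if entity is not None and entity.get("endDate") is not None:
        triggered = True                      # entity endDate is set
    if rel is not None and (rel.get("endDate") is not None
                            or rel.get("startRefIgnored") is True
                            or rel.get("endRefIgnored") is True):
        triggered = True                      # relation trigger conditions
    new_type = INACTIVATION_MAP.get(event["type"])
    if not triggered or new_type is None:
        return event                          # skip: pass through unchanged
    return {**event, "type": new_type}
```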
{
"title": "MultiMerge Processor",
"pageID": "302697588",
"pageLink": "/display/GMDM/MultiMerge+Processor",
"content": "MultiMerge Processor is one of Precallback Service's event Processors.For MERGED events, it checks if targetEntity.uri is equal to the first URI from entitiesURIs. If it is different, entitiesURIs is adjusted by inserting targetEntity.uri at the beginning. This ensures that entitiesURIs[0] always contains the merge winner, even in cases of multiple merges.AlgorithmFor each event of type:HCP_MERGED,HCO_MERGED,MCO_MERGED,do:if targetEntity.uri is null, skip event,if entitiesURIs[0] and targetEntity.uri are equal, skip event,insert targetEntity.uri at the beginning of entitiesURIs and return the event."
},
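The steps above can be sketched in a few lines. This is an illustrative sketch with the event shape simplified to a dict; field names follow the text, not the actual classes:

```python
MERGED_TYPES = {"HCP_MERGED", "HCO_MERGED", "MCO_MERGED"}

def fix_merge_winner(event):
    """Ensure entitiesURIs[0] always carries the merge winner."""
    if event.get("type") not in MERGED_TYPES:
        return event
    uri = (event.get("targetEntity") or {}).get("uri")
    uris = event.setdefault("entitiesURIs", [])
    if uri is None or (uris and uris[0] == uri):
        return event                     # skip: nothing to adjust
    uris.insert(0, uri)                  # winner goes first
    return event
```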
{
"title": "OtherHCOtoHCOAffiliations Rankings",
"pageID": "319291954",
"pageLink": "/display/GMDM/OtherHCOtoHCOAffiliations+Rankings",
"content": "DescriptionThe process was designed to rank OtherHCOtoHCOAffiliations with rules that are specific to the country. The current configuration contains an Activator and Rankers available for the AU and NZ countries and the OtherHCOtoHCOAffiliations type. The process (compared to the ContactAffiliations one) was designed to process RELATIONSHIP_CHANGE events, which are single events that contain one piece of information about a specific relation. The process builds the cache with the hierarchy of objects where the main object is the Reltio EndObject (the direction in which we check and implement the Rankings: (child)END_OBJECT -> START_OBJECT(parent)). A change in the relation does not generate HCO_CHANGE events, so we need to check relation events. Relation change/create/remove events may change the hierarchy and ranking order.Comparing this to the ContactAffiliations ranking logic, a change on the HCP object carried information about the whole hierarchy in one event, which meant we could calculate and generate events based on HCP CHANGE.This new logic builds the hierarchy based on RELATIONSHIP events, compacts the changes in the time window, and generates events after aggregation to limit the number of changes in Reltio and API calls. 
DATA VERIFICATION:Snowflake queries:\nSELECT COUNT(*) FROM (\n\nSELECT END_ENTITY_URI, COUNT(*) FROM COMM_APAC_MDM_DMART_PROD_DB.CUSTOMER_SL.MDM_RELATIONS\n\nWHERE COUNTRY = 'AU' and RELATION_TYPE ='OtherHCOtoHCOAffiliations' and ACTIVE = TRUE\n\nGROUP BY END_ENTITY_URI\n\n)\n\n\n\n\nSELECT COUNT(*) FROM COMM_APAC_MDM_DMART_PROD_DB.CUSTOMER_SL.MDM_ENTITIES\n\nWHERE ENTITY_TYPE='HCO' and COUNTRY ='AU' AND ACTIVE = TRUE\n\nSELECT COUNT(*) FROM (\n\nSELECT END_ENTITY_URI, COUNT(*) FROM COMM_APAC_MDM_DMART_PROD_DB.CUSTOMER_SL.MDM_RELATIONS\n\nWHERE COUNTRY = 'NZ' and RELATION_TYPE ='OtherHCOtoHCOAffiliations' and ACTIVE = TRUE\n\nGROUP BY END_ENTITY_URI\n\n)\nA few example cases from APAC QA (END_ENTITY_URI, COUNTRY, count):010Xcxi NZ 2, 00zxT2O NZ 2, 008NxIA NZ 2, 1CVfmxOm NZ 2, VCMuTvz NZ 2, cvoyNhG NZ 2, VCMnOvP NZ 2, 00yZOis NZ 2, 00JoRnN NZ 2\nSELECT END_ENTITY_URI, COUNTRY, COUNT(*) AS count FROM CUSTOMER_SL.MDM_RELATIONS\n\nWHERE RELATION_TYPE ='OtherHCOtoHCOAffiliations' AND ACTIVE = TRUE\n\nAND COUNTRY IN ('AU','NZ')\n\nGROUP BY END_ENTITY_URI, COUNTRY\n\nORDER BY count DESC\nCq2pWio AU 5, 00KcdEA AU 3, T5NxyUa AU 3, ZsTdYcS AU 3, XhGoqwo AU 3, 00wMWdy AU 3, Cq1wjj8 AU 3The direction that we should check and implement the Rankings:(child)END_OBJECT -> START_OBJECT(parent)We are starting with child objects and checking if this child is connected to multiple parents, and we are ranking them. In most cases, 99% of these will be one relation that will be auto-filled with rank=1 during load. 
If not, we are going to rank it using the below implementation:Example:https://mpe-02.reltio.com/nui/xs4oRCXpCKewNDK/profile?entityUri=entities%2F00KcdEAREQUIREMENTS:Flow diagramLogical ArchitecturePreDelayCallback LogicStepsOverview Reltio attributes\nATTRIBUTES TO UPDATE/INSERT\nRANK\n {\n "label": "Rank",\n "name": "Rank",\n "description": "Rank",\n "type": "Int",\n "hidden": false,\n "important": false,\n "system": false,\n "required": false,\n "faceted": true,\n "searchable": true,\n "attributeOrdering": {\n "orderType": "ASC",\n "orderingStrategy": "LUD"\n },\n "uri": "configuration/relationTypes/OtherHCOtoHCOAffiliations/attributes/Rank",\n "skipInDataAccess": false\n },\nPreCallback Logic - RANK ActivatorDelayRankActivationProcessor:The purpose of this activator is to pick specific events and push them to delay-events topics; events from this topic will be ranked using the algorithm described on this page (OtherHCOtoHCOAffiliations Rankings), and the flow is also described below.Logic:Check the activation criteria; when true, route the event to the delay topic, otherwise push the main event as is to the proc-events topic for the next HUB processing phase (event publishing)When all activation criteria are met:acceptedEventTypes - the event is a RELATION type from the listacceptedRelationObjectTypes - the event is a relation of the specified OtherHCOToHCO typeacceptedCountries - the relation is from a specified countryDo:pick the eventscopy the main event to the delayedEventsclear the mainEvents (do not push events to the next publishing phase)Before sending, apply the additionalFunctions (specify the interface/process and run all selected)Here, change the Kafka key and put relation.endObject.objectURI as the RELATION event key.Example configuration for AU and NZ:delayRankActivationCallback: featureActivation: true activators: - description: "Delay OtherHCOtoHCOAffiliations RELATION events from AU and NZ country to calculate Rank in delay service" acceptedEventTypes: - 
RELATIONSHIP_CHANGED - RELATIONSHIP_CREATED - RELATIONSHIP_REMOVED - RELATIONSHIP_INACTIVATED acceptedRelationObjectTypes: - configuration/relationTypes/OtherHCOtoHCOAffiliations acceptedCountries: - AU - NZ additionalFunctions: - RelationEndObjectAsKafkaKeyPreDelayCallback - RANK LogicThe purpose of this pre-delay-callback service is to Rank specific objects (currently available: OtherHCOToHCO ranking for AU and NZ - OtherHCOtoHCOAffiliations Rankings)CallbackWithDelay and CurrentStateCache advantages:The cache is built on the fly based on Mongo (a one-time GET of each end Object) and enriched by events during its lifetime - the logic is in KafkaStreams and we are using a State store in KafkaStreams.(optional) A model change (re-ranking) will cause the cache removal and regeneration of events; the cache will be rebuilt with the new model, so in case of future changes we can re-rank based on new rules.The cache contains only required attributes and is updated in real-timeIn most cases the relations are in sync, so no changes will be pushed to the delay-events topic and everything will be pushed in real-time to target systems (Snowflake)In case of any change in any relation, we will aggregate all relations by the EndObjectId. This allows us to emit an aggregation window one time for each EndObject so that changes are generated for one entity in one run. It may also happen that one new relation re-ranks the whole object hierarchy. Using this logic, one event goes to the Delay logic and triggers the difference comparison and the generation of multiple updates. These updates (after Reltio publishing) will go to the PreDelay state and we are going to check if the data is in sync and if we generated all events. 
In that case, all events should flow to proc-events and to SnowflakeWe set a 1h window to aggregate multiple changes (relationship updates) and emit windows in 1h intervals.Snowflake is refreshed on PROD in 2h windows - we fit into this so that all events are ready and do not contain a partial state (though in Snowflake this may still happen in some edge cases). The advantage of this solution is that all RELATIONS will have Rank in Snowflake, so there will be no state without Rank.Logic: PreDelayPoll event from internal-reltio-full-delay-eventsFor each active rank sorter (currently OtherHCOToHCO) execute the logicWe need a state store that will contain the RelationData cache of all relation hierarchies.The event key that will be moved here will be endObjectId, so that all events related to the specific end object will be on one partition and we will query Mongo one time (no parallelism by endObjectId)Check if “CurrentStateCache” contains the state for endObjectIdIf not, execute GetRelationsByEndObjectId (this returns a list of relations)Transform the output to the CurrentStateCache modelIf it exists, update (join) the current Relation to CurrentStateCache by endObject and update the relations KeyValue MapCheck if the Relation Rank is in sync with SortedState, and if true push such an event to outputTopic (reltio-proc-events)execute function isRelationRankInSyncWithCurrentSortedState (Relation, CurrentStateCache)If Relation.Rank == null -> falseIf Relation.Rank != nullSort CurrentStateCacheCheck if the Relation's Rank is the same as in the SortedStateCache (it means we need to check if the current Relation Rank is correct)If the function returns true, publish the Relationship event to OUTPUT TOPIC Push events with the Kafka key equal to the relation (reverse logic of - RelationEndObjectAsKafkaKey)If the function returns false, go to the Delay stepPush event (end object id) to ${env}-internal-reltio-full-callback-delay-eventsDelayAggregate all events in the time window (configurable) by end 
object ID.NOTE: check the closing window for a selected key after the inactivity period; extend the window for the selected key if a new event comes in. To save space in the delay/suppress window, store only endObjectIDsPostDelayWhen the aggregation window is closed do:Execute the activation function.Sort(CurrentState) - check the whole hierarchy and sort the state to the desired stateThe result of this function is an ArrayList of AttributeChanges related to the relations that have to be updated.As a result, push all events to bulk-callback topics that will cause an update in Reltio.Data Model and Configuration\nRelationData cache model:\n[\n Id: endObjectId\n relations:\n     - relationUri: relations/13pTXPR0\n       endObjectUri: endObjectId\n          country: AU \n         crosswalks:\n - type: ONEKEY\n value: WSK123sdcF\n deleteDate: 123324521243\n RankUri: e.g. relations/13pTXPR0/attributes/Rank\n Rank: null\n \t Attributes:\n Status:\n \t - ACTIVE\n        RelationType/RelationshipDescription:\n - REL.MAI\n - REL.CON\n\n]\n\n\nTriggersRankActivationTrigger actionComponentActionDefault timeIN Events incoming Callback Service: Pre-Callback: DelayRankActivationProcessor$env-internal-reltio-full-eventsFull events trigger pre-callback stream and the activation logic that will route the events to next processing staterealtime - events streamOUT Activated events to be sortedCallback Service: Pre-Callback: DelayRankActivationProcessor $env-internal-reltio-full-delay-eventsOutput topicrealtime - events streamTrigger actionComponentActionDefault timeIN Events incoming mdm-callback-delay-service: Pre-Delay-Callback: PreCallbackDelayStream$env-internal-reltio-full-delay-eventsDELAY: ${env}-internal-reltio-full-callback-delay-eventsFull events trigger pre-delay-callback stream and the ranking logicrealtime - events streamOUT Sorted events with the correct state mdm-callback-delay-service: Pre-Delay-Callback: 
PreCallbackDelayStream$env-internal-reltio-proc-eventsOutput topic with correct eventsrealtime - events streamOUT Reltio Updatesmdm-callback-delay-service: Pre-Delay-Callback: PostCallbackStream$env-internal-async-all-bulk-callbacksOutput topic with Reltio updatesrealtime - events streamDependent componentsComponentUsageCallback ServiceRELATION ranking activator that push events to delay serviceCallback Delay ServiceMain Service with OtherHCOtoHCOAffiliations Rankings logicEntity EnricherGenerates incoming events full eventsManagerProcess callbacks generated by this serviceAttachment docs with more technical implementation details:example-reqeusts.json"
},
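The core sync check (isRelationRankInSyncWithCurrentSortedState) can be sketched as: sort the cached relations for an endObject, then verify that the incoming relation's Rank matches its position in that sorted order. This is a hedged sketch under assumptions: the sort key (country, then relationUri) and the flat dict model are placeholders, since the page does not spell out the real per-country comparator.

```python
def sort_relations(cache):
    # cache: list of relation dicts for one endObject.
    # Illustrative sort key; the real rankers apply country-specific rules.
    return sorted(cache, key=lambda r: (r.get("country", ""), r["relationUri"]))

def is_rank_in_sync(relation, cache):
    """Relation.Rank == null -> False; otherwise compare against sorted position."""
    if relation.get("rank") is None:
        return False
    ranked = sort_relations(cache)
    for position, rel in enumerate(ranked, start=1):
        if rel["relationUri"] == relation["relationUri"]:
            return relation["rank"] == position
    return False
```

When this returns True the event can flow straight to proc-events; when False the event is delayed, aggregated by endObjectId, and re-ranked after the window closes.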
{
"title": "HCPType Callback",
"pageID": "347637202",
"pageLink": "/display/GMDM/HCPType+Callback",
"content": "DescriptionThe process was designed to update the HCPType RDM code in the TypeCode attribute on HCP profiles. The process is based on event streaming: the main event is recalculated based on the current state, and when comparing the existing TypeCode on the Profile with the calculated value, a callback is generated. This process (like all processes in PreCallback Service) blocks the main event and will send the update to external clients only when the update is visible in Reltio and TypeCode contains the correct code. The process uses the RDM as an internal cache and calculates the output value based on the current mapping. To limit the number of requests to RDM we are using the internal Mongo Cache, and we refresh this cache every 2 hours on PROD. Additionally, we designed the in-memory cache to store the 2 required codes (PRES/NON-PRESC) with HUB_CALLBACK source code values.This logic is related to these 2 values in Reltio HCP profiles:Type - Prescriber (HCPT.PRES)Type - Non-Prescriber (HCPT.NPRS)Why this process was designed:With the addition of the Eastern Cluster LOVs, we have hit the limit/issue where HCP Type Prescriber & Non-Prescriber canonical codes no longer fit into RDM.The issue is a size limit in RDM's underlying GCP tech stack. It is a GCP physical limitation and cannot be increased. We cannot add new RDM codes to the PRES/NON-PRESC codes, and this will cause issues in HCP data.The previous logic:In the ingestion service layer (all API calls) there was a DQ rule called “HCP TypeCode”. This logic adds the TypeCode as a concatenation of SubTypeCode and the Specialty ranked 1. The logic gets source codes and puts the concatenation in the TypeCode attribute. 
The number of combinations of source codes is reaching the limit, so we are building the new logic.For future reference, here are the old DQ rules that will be removed after we deploy the new process.DQ rules (sort rank):- name: Sort specialities by source rank category: OTHER createdDate: 20-10-2022 modifiedDate: 20-10-2022 preconditions: - type: operationType values: - create - update - type: not preconditions: - type: source values: - HUB_CALLBACK - NUCLEUS - LEGACYMDM - PFORCERX_ID - type: not preconditions: - type: match attribute: TypeCode values: - "^.+$" action: type: sort key: Specialities sorter: SourceRankSorterDQ rules (add sub type code):- name: Autofill sub type code when sub type is null/empty category: AUTOFILL_BASE createdDate: 20-10-2022 modifiedDate: 20-10-2022 preconditions: - type: operationType values: - create - update - type: not preconditions: - type: source values: - HUB_CALLBACK - NUCLEUS - LEGACYMDM - PFORCERX_ID - KOL_OneView action: type: modify attributes: - TypeCode value: "{SubTypeCode}-{Specialities.Specialty}" replaceNulls: true when: - "" - "NULL"Example of previous input values:attributes: "TypeCode": [ { "value": "TYP.M-SP.WDE.04" } ]TYP.M is a SubTypeCodeSP.WDE.04 is a Specialitycalculated value - PRESC:As we can see on this screenshot from EMEA PROD, there are 2920 combinations for one ONEKEY source that generate the PRESC value. The new logic:The new logic was designed in the pre-callback service in hybrid mode. The logic uses the same assumptions as the previous version, but instead we are using Reltio canonical codes, and this limits the number of combinations. 
We are providing this value using only one source, HUB_CALLBACK, so there is no need to configure ONEKEY, GRV and all other sources that provide multiple combinations.Advantages:The service populates HCP Type with SubType & Specialty canonical codesHCP Type LOVs are reduced to a single source (HUB_CALLBACK) and canonical codesA change in the HCP Type RDM will be processed using the standard reindex process.This change impacts the Historical Inactive flow change described in Snowflake: HI HCPType enrichment. Key features in the new logic and what you should know:A change in the HCP Type RDM will be processed using the standard reindex process.The HCP TypeCode calculation is based on the OV profile and Reltio canonical codesPreviously, each source delivered data and the ingestion service calculated TypeCode based on the RAW JSON data delivered by the source.Now we calculate on the OV Profile, not on the source level.We deliver only one value using the HUB_CALLBACK crosswalk.Now, once we receive the event, we have access to the ov:true golden profileSpecialties is a list; each entry has the SourceName and SourceRank, so we pick the one with Rank 1 for the selected profile.SubTypeCode is a single attribute, and we can pick only the ov:true value.The 2 canonical codes are mapped to the TypeCode attribute like in the below example Activation/Deactivation of profiles in Reltio and the Historical Inactive flowSnowflake: HI HCPType enrichmentSnowflake: History Inactive When the whole profile is deactivated, HUB_CALLBACK technical crosswalks are hard-deleted, so HCPTypeCode will be hard-deletedThis impacts HI Views because the HUB_CALLBACK value will be droppedWe implemented logic in the HI view that will rebuild the TypeCode attribute and put the PRES/NON-PRESC in the JSON file visible in the HI view. 
Reltio contains checksum logic and does not generate an event when the sourceCode changes but is mapped to the same canonical codeWe implemented delta detection logic, and we are sending an update only when a change is detected. Lookup to RDM requires logic to resolve the HUB_CALLBACK code to a canonical code. A change is sent only when:the Type does not exist,the Type changes from PRESC to NON-PRESC,the Type changes from NON-PRESC to PRESC.Example of new input values:attributes: "TypeCode": [ { "value": "HCPST.M-SP.AN" } ]TYP.M is a SubTypeCode source code mapped to HCPST.M; SP.WDE.04 is a Speciality source code mapped to SP.ANrdm/lookupTypes/HCPSubTypeCode:HCPST.Mrdm/lookupTypes/HCPSpecialty:SP.ANFlow diagramLogical ArchitectureHCPType PreCallback LogicStepsOverview Reltio attributes and RDM { "label": "Type", "name": "TypeCode", "description": "HCP Type Code", "type": "String", "hidden": false, "important": false, "system": false, "required": false, "faceted": true, "searchable": true, "attributeOrdering": { "orderType": "ASC", "orderingStrategy": "LUD" }, "uri": "configuration/entityTypes/HCP/attributes/TypeCode", "lookupCode": "rdm/lookupTypes/HCPType", "skipInDataAccess": false },Based on:SubTypeCode: { "label": "Sub Type", "name": "SubTypeCode", "description": "HCP SubType Code", "type": "String", "hidden": false, "important": false, "system": false, "required": false, "faceted": true, "searchable": true, 
"attributeOrdering": {                        "orderType": "ASC",                        "orderingStrategy": "LUD"                    },                    "uri": "configuration/entityTypes/HCP/attributes/SubTypeCode",                    "lookupCode": "rdm/lookupTypes/HCPSubTypeCode",                    "skipInDataAccess": false                },Speciality:                        {                            "label": "Specialty",                            "name": "Specialty",                            "description": "Specialty of the entity, e.g., Adult Congenital Heart Disease",                            "type": "String",                            "hidden": false,                            "important": false,                            "system": false,                            "required": false,                            "faceted": true,                            "searchable": true,                            "attributeOrdering": {                                "orderingStrategy": "LUD"                            },                            "cardinality": {                                "minValue": 0,                                "maxValue": 1                            },                            "uri": "configuration/entityTypes/HCP/attributes/Specialities/attributes/Specialty",                            "lookupCode": "rdm/lookupTypes/HCPSpecialty",                            "skipInDataAccess": false                        },RDMCodes:rdm/lookupTypes/HCPType:HCPT.NPRSrdm/lookupTypes/HCPType:HCPT.PRESHCPType PreCallback LogicFlow:Component Startupduring the Pre-Callback component startup we are initializing in memory cache to store 2 PRESC and NPRES values for HUB_CALLBACK soruceThis implementation limits number of requests to RDM Reltio through managerAlso this limit number of API call manager service from pre-callback serviceThe Cache contains TTL configuration and is invalidated after TTLActivationCheck if feature flag activation is trueTake 
into account only the CHANGED and CREATED events in this pre-callback implementation limited to HCP objectsTake into account only profiles whose crosswalks are not on the following list. When the Profile contains only crosswalks from this configuration list, skip the TypeCode generation. When the Profile contains the following crosswalk and additionally a valid crosswalk like ONEKEY, generate a TypeCode.- type: not preconditions: - type: source values: - HUB_CALLBACK - NUCLEUS - LEGACYMDM - PFORCERX_IDStepsEach CHANGE or CREATE event triggers the following logic:Get the canonical code from HCP/attributes/SubTypeCodepick a lookupCode<fallback 1> if lookupCode is missing and lookupError exists, pick a value<fallback 2> if the SubTypeCode does not exist, put an empty value = ""Get the canonical code from the HCP/attributes/Specialities/attributes/Specialty arraypick a speciality with Rank equal to 1pick a lookupCode<fallback 1> if lookupCode is missing and lookupError exists, pick a value<fallback 2> if the Specialty does not exist, put an empty value = ""Combine the two canonical codes, using the "-" hyphen character as a concatenation.possible values:<subtypecode_canonicalCode>-<speciality_canonicalCode><subtypecode_canonicalCode>-""""-<speciality_canonicalCode>""-""Execute delta detection logic:<transformation function>: using the RDM cache, translate the generated value to the PRESC or NPRES codeCompare the generated value with HCP/attributes/TypeCodepick a lookupCode and compare to the generated and translated value<fallback 1> if lookupCode is missing and lookupError exists, pick a value and compare to the generated and not translated valueGenerate:INSERT_ATTRIBUTE: when TypeCode does not existUPDATE_ATTRIBUTE: when the value is differentForward the main event to the next processing topic when there are 0 changes.TriggersTrigger actionComponentActionDefault timeIN Events incoming Callback Service: Pre-Callback:HCP Type Callback logicFull events trigger pre-callback stream and during processing, 
partial events are processed with generated changes. If data is in sync partial event is not generated, and the main event is forwarded to external clientsrealtime - events streamDependent componentsComponentUsageCallback ServiceMain component of flow implementationEntity EnricherGenerates incoming events full eventsManagerProcess callbacks generated by this serviceHub StoreHUB Mongo CacheLOV readLookup RDM values flow"
},
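The calculation and delta-detection steps above can be sketched as follows. This is an illustrative sketch only: attribute access is simplified to dicts, the RDM cache is a plain mapping, and the comparison collapses the two documented fallbacks into one `canonical` helper, so it is a simplification of the real fallback handling.

```python
def canonical(attr_value):
    """Pick lookupCode; fall back to raw value on lookupError; else empty string."""
    if attr_value is None:
        return ""                                  # fallback 2: attribute missing
    if attr_value.get("lookupCode"):
        return attr_value["lookupCode"]
    return attr_value.get("value", "")             # fallback 1: raw source value

def calc_type_code(sub_type, specialties):
    """Combine SubTypeCode and Rank-1 Specialty canonical codes with '-'."""
    rank1 = next((s for s in specialties if s.get("rank") == 1), None)
    return f"{canonical(sub_type)}-{canonical(rank1)}"

def detect_change(generated, current, rdm_cache):
    # Translate e.g. "HCPST.M-SP.AN" -> "HCPT.PRES"/"HCPT.NPRS" via the RDM cache.
    translated = rdm_cache.get(generated)
    if current is None:
        return ("INSERT_ATTRIBUTE", translated)    # TypeCode does not exist
    if canonical(current) != translated:
        return ("UPDATE_ATTRIBUTE", translated)    # value differs
    return None                                    # in sync: forward main event
```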
{
"title": "China IQVIA<->COMPANY",
"pageID": "263501508",
"pageLink": "/display/GMDM/China+IQVIA%3C-%3ECOMPANY",
"content": "DescriptionThe section and all subpages describe HUB adjustments for China clients with transformation to the COMPANY model. HUB created a logic to allow China clients to make a transparent transition between IQVIA and COMPANY Models. Additionally, the DCR process will be adjusted to the new COMPANY model. The New DCR process will eliminate a lot of DCRs that are currently created in the IQVIA tenant. The description of changes and all flows are described in this section and the subpages, links are displayed below. HUB processed all the changes in MR-4191 the MAIN task, To verify and track please check Jira.China Changes:China is now using the IQVIA model (createHCP operation)The goal realized in these changes is to have the same features as COMPANY model but China will use the IQVIA model (for China change should be transparent)current IQVIA PROD - https://eu-360.reltio.com/ui/FW2ZTF8K3JpdfFl (GBL PROD)new COMPANY PROD - https://ap-360.reltio.com/ui/sew6PfkTtSZhLdW/ (APAC PROD)Changes in Direct Channel (API) (input IQVIA model -> output COMPANY model transformation)Changes in Events Streaming (events) (input COMPANY model -> output IQVIA model transformation)Changes in map-channel. 
China GRV data in IQVIA model loaded to COMPANY modelCreate a Generic common transformation class:transformIqviaToCOMPANYtransformCOMPANYToIqviaDCR China adjustments to the COMPANY modelFlowsChina IQVIA - current flow and user properties + COMPANY changesOn this page, the current IQVIA flow for China users is described.User properties for China users, the DCR activation criteria.HUB components and China configuration used in HUBThe page also contains the COMPANY changes and the affected components that will be changedCreate HCP/HCO complex methods - IQVIA model (legacy)This page describes the HCP/HCO create API operations used in IQVIA; based on this logic, the new COMPANY logic was adjusted.The old logic is complicated and will be deprecated in the future.The new logic contains new solutions and was written in a more readable format. In the new logic, the DCR process is moved outside of the API to the external dcr-service-2 component.Create HCP/HCO complex V2 methods - COMPANY modelNew COMPANY logic for the creation of the HCP and HCO objects.The logic is divided into two sections:simple - create an HCP/HCO object without affiliationscomplex - create an HCP/HCO object with affiliations The logic also triggers the DCR process if required.The new COMPANY code changes add the V1 and V2 prefixes to the API.Existing COMPANY model operations will be switched to V2 APIsIQVIA users will use the V1 API - this is required to keep the old logic; in the future the old V1 API will be deprecated and removed.V1/V2 APIs are transparent to external clients; this is handled on the HUB sideDCR IQVIA flowOLD DCR IQVIA model logicDCR COMPANY flowNew DCR COMPANY model logicChina Selective Router - model transformation flowAdditionally, a microservice used to transform COMPANY model events to the IQVIA modelThe microservice uses the predefined mapping and transforms the output events to the China target output topicThe logic also contains the Reference Attributes lookup, like:L1 - get HCP → HCO (Workplaces using COMPANY 
ContactAffiliations)L2 - get HCO → HCO (MainHCO using COMPANY OtherHCOtoHCOAffiliations)The output HCP is combined and contains full information about all L1 and L2 objects (same as on IQVIA)Model Mapping (IQVIA<->COMPANY)Model mapping documentTransformation used during API calls or events streaming processing User Profile (China user)User Profile for the China usercontains all details and configuration properties in one place.All DCR/Search/Trigger/CrosswalkGenerators are configured in one file and are shared across all HUB microservices. TriggersDescribed in the separate sub-pages for each process.Dependent componentsDescribed in the separate sub-pages for each process.Documents with HUB detailsmapping China_attributes.xlsxAPI: China_HUB_Changes.docxdcr: China_HUB_DCR_Changes.docx"
},
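The transformIqviaToCOMPANY direction can be pictured as an attribute-level rename pass. This is a hypothetical sketch only: the real mapping lives in the China_attributes.xlsx mapping document and covers far more than the two reference attributes assumed here (Workplace → ContactAffiliations, MainHCO → OtherHCOtoHCOAffiliations, inferred from the refLookupConfig examples on these pages).

```python
# Assumed partial mapping; the full IQVIA<->COMPANY mapping is maintained
# in the model mapping document, not here.
IQVIA_TO_COMPANY = {
    "Workplace": "ContactAffiliations",
    "MainHCO": "OtherHCOtoHCOAffiliations",
}

def transform_iqvia_to_company(entity):
    """Rename IQVIA-model attribute keys to their COMPANY-model counterparts."""
    attrs = entity.get("attributes", {})
    entity["attributes"] = {IQVIA_TO_COMPANY.get(k, k): v for k, v in attrs.items()}
    return entity
```

The reverse direction (transformCOMPANYToIqvia) would invert the same table; the Selective Router additionally re-attaches the L1/L2 reference data so the IQVIA-style output carries the full hierarchy.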
{
"title": "China IQVIA - current flow and user properties + COMPANY changes",
"pageID": "284805827",
"pageLink": "/pages/viewpage.action?pageId=284805827",
"content": "DescriptionOn this page, the current IQVIA flow is described. Contains the full API description, and complex API on IQVIA end with all details about HUB configuration and properties used for the China IQVIA model.In the next section of this page, the COMPANY changes are described in a generic way. More details of the new COMPANY complex model and API adjustments were described in other subpages. IQVIACurrent process notes:China uses the createHCP operation (the object with affiliation to HCO(Workplace) and MainHCO(Hospital))GRV source is the only source that creates DCRsCurrent operations used by ChinaIQVIA Kibana details: https://kibana-emea-prod-gbl-mdm-hub.COMPANY.com/app/r/s/BrC2vOperations:GetEntity (only used by event hub user)CreateHCORoute (china_apps)CreateHCPRoute (china_apps and map_channel)CreateDCRRoute (as a part of a createHCP route where DCR is executed)UpdateHCPRoute (china_apps)Users:eventhubchina_appsmap_channelSources:GRVEVRMDEFACECN3RDPARTYMap_ChannelGRV source is there with CN countryManagerManager affiliations activation and configuration\naffiliationConfig:\n hcpToL1HcoRefAttributeName:\n Workplace:\n - country: "CN"\n hcpToL2HcoRefAttributeName:\n MainWorkplace:\n - country: "CN"\n hcoToHcoRefAttributeName:\n MainHCO:\n - country: "CN"\n waitForNewHcoDCRApprove:\n - country: "CN"\n\n\nDCRs current legacy config\ndcrConfig:\n dcrProcessing: yes\n routeEnableOnStartup: yes\n deadLetterEndpoint: "file:///opt/app/log/rejected/"\n externalLogActive: yes\n activationCriteria:\n NEW_HCO:\n - country: "CN"\n sources:\n - "CN3RDPARTY"\n - "FACE"\n - "GRV"\n NEW_HCP:\n - country: "CN"\n sources:\n - "GRV"\n NEW_WORKPLACE:\n - country: "CN"\n sources:\n - "GRV"\n - "MDE"\n - "FACE"\n - "CN3RDPARTY"\n - "EVR"\n\n externalDCRActivationCriteria:\n - country: "CN"\n sources:\n - "CN3RDPARTY"\n - "FACE"\n - "GRV"\n\n continueOnHCONotFoundActivationCriteria:\n - country: "CN"\n sources:\n - "GCP"\n - countries:\n - AD\n - BL\n - BR\n - DE\n - 
ES\n - FR\n - FR\n - GF\n - GP\n - IT\n - MC\n - MF\n - MQ\n - MU\n - MX\n - NC\n - NL\n - PF\n - PM\n - RE\n - RU\n - TR\n - WF\n - YT\n sources:\n - GRV\n - GCP\n validationStatusesMap:\n VALID: validated\n NOT_VALID: notvalidated\n PENDING: pending\n\n delayPrcInSeconds: 3600\n dcrTopic: "{{env_name}}-gw-dcr-requests"\n\n\nUsers that use CN country in HUB:china_apps\n- name: "china_apps"\n description: "China applications access user"\n defaultClient: "ReltioAll"\n roles:\n - "CREATE_HCP"\n - "CREATE_HCO"\n - "UPDATE_HCO"\n - "UPDATE_HCP"\n - "GET_ENTITIES"\n - "RESPONSE_DCR"\n - "LOOKUPS"\n countries:\n - "CN"\n sources:\n - "CN3RDPARTY"\n - "MDE"\n - "FACE"\n - "EVR"\n\n\n\nmap_channel\n- name: "map_channel"\n description: "Map Channel (Handler) account"\n defaultClient: "ReltioAll"\n roles:\n - "UPDATE_HCP"\n - "CREATE_HCP"\n - "CREATE_HCO"\n - "DELETE_CROSSWALK"\n countries:\n - "CN"\n - "AD"\n…\n sources:\n - "GRV"\n - "GCP"\n\n\nCallback-Service:refLookupConfig\nrefLookupConfig:\n - country: CN\n maxDepth: 2\n useCache: true\n entities:\n - type: HCP\n attributes:\n - Workplace\n - type: HCO\n attributes:\n - MainHCO\nThe callback service is adding enrichment to HCP. Workplace and HCP.Workplace.MainHCO objects In mongo and in published events we are storing more information than the Reltio. The result is that we have the HCP full data and Workplace and full data and Workplace.MainHCO full data. The MainHCO Workplace is enriched by Workplace references. 
For China, Mongo and the Publisher carry full information in these objects: published events and Mongo documents are enriched with this data.Event publisher:\n- id: hcp-china\n selector: "(exchange.in.headers.reconciliationTarget==null)\n && exchange.in.headers.eventType in ['full']\n && exchange.in.headers.country in ['cn']\n && ['CN3RDPARTY', 'MDE', 'FACE', 'EVR', 'GRV', 'GCP', 'Reltio'].intersect(exchange.in.headers.eventSource)\n && exchange.in.headers?.eventSubtype.startsWith('HCP_')"\n destination: "prod-out-full-mde-cn"\nPublishing of China events for the sources above: HCP entities, full events (data is trimmed)COMPANYThe key concepts and general description of the COMPANY adjustments:The current IQVIA flow should work only on the old IQVIA tenant and will be deprecated in the futureOn the new COMPANY model there will be V1 and V2 API versions, transparent for the external client; V2 is the new logic that will be used by all clients and also by the China client with the IQVIA modelThe /batch/hcp method is optimized as part of these changes because all APIs now accept a list of entities. 
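The hcp-china selector above can be read as a plain predicate over the event headers. A minimal Python sketch (the real selector is a Groovy-style expression evaluated by the Java publisher; the function name and the header-dict shape are assumptions, the header names come from the expression itself):

```python
# Sketch of the hcp-china publisher selector as a plain predicate.
# Header names come from the selector expression; the function is illustrative.

ALLOWED_SOURCES = {'CN3RDPARTY', 'MDE', 'FACE', 'EVR', 'GRV', 'GCP', 'Reltio'}

def matches_hcp_china(headers):
    subtype = headers.get('eventSubtype') or ''
    return (headers.get('reconciliationTarget') is None
            and headers.get('eventType') in ('full',)
            and headers.get('country') in ('cn',)
            and bool(ALLOWED_SOURCES.intersection(headers.get('eventSource', ())))
            and subtype.startswith('HCP_'))
```

Only events for which the predicate holds are routed to the prod-out-full-mde-cn destination.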
Created methods:New Service V2 (input bulk or single entity)- POST/PATCH HCP (simple method without affiliated HCO) (array of entities)- POST/PATCH HCP (complex method with affiliated HCO) (array of entities)- POST/PATCH HCO (array of entities)- POST/PATCH MCO (array of entities)Transformation executed if:Source: IQVIA (user profile configuration)Target: COMPANY (user profile configuration)Then execute the transformation and the complex flow with affiliated HCOThe API-router service will be used to make a transparent transition between the V1 and V2 APIs2 methods, v1 and v2All COMPANY clients using the COMPANY model will be switched to V2V1 will be removed in the future after IQVIA is deprecatedTransformation LIB (full description on a separate subpage):transformIqviaToCOMPANYtransformCOMPANYToIqviaUser Profile - Feature switchIQVIA vs COMPANY model on user configuration:User Profile objects will be provided. The whole configuration shared across all components will be kept in one file. Publishing changes:China Selective Router - a new microservice that translates China events from the COMPANY model to the IQVIA modelInput: China COMPANY model topicEnrich HCP with HCO data (workplace/mainHCO)Output: target IQVIA model topicOpen API Documentation on CamelSwagger UI contains the whole API description, and API documentation is managed in code and automatically generated. DCR processIntegrate the manager complex method with dcr-service-2 (using triggers) Create requests that have the model in dcr-service-2K8s separated environmentAPAC-China-DEV is a separate environment used for China testing. The environment is set up dynamically on K8sThe component changes related to this adjustment:The Reltio-Subscriber component works on DEV as an events router:There is only one SQS queue, but 2 output topics in the subscriber publisher. The event router decides whether to move an event to the APAC-DEV or CHINA-DEV environment (e.g. china profiles tagged with china-test-cases). 
Reltio-subscriber reads the tag name and pushes the event to topic {tag-name}; a specified number of tag names is allowed when publishing to the output topic (profiles test mode). PROD by default runs in normal mode Manager Changes Create HCP/HCO operations used by HUB automated integration tests adding the China-TEST tag that is routed only to the CHINA-DEV environment HCP Service Complex (POST/PATCH) V2 Key concepts and changesCrosswalk Generator - configured in User Profile - allows automatically generating a crosswalk when one is missing:(common) CrosswalkGenerator first type (implementation) UUID generator (autofill: Type <>, Value: <UUID generator>, SourceTable:)associated with the Service and User (when the user does not provide the crosswalk we can generate an HCP or HCO crosswalk)For example, if the HCP.affiliatedHCO crosswalk is missing, we will generate a new oneFind Service - configured in User Profile - contains the implementation of multiple search cases. A user can be configured to use a specific set of searches. Used, for example, to find the Workplace related to the HCP in the Complex V2 API.Find Object Method (_findObject (getByUri/getByCrosswalk/getByName etc.):UserProfile configuration drivenInput entity objectSearch ByrefEntity ObjectURICrosswalkSearch method (Reltio (?filter) ) getByName (search by Reltio Name attribute - configurable)...It is possible to add multiple different searches or configure the current searches by defining the attribute namesTrigger - configured in User Profile. Contains the Trigger mode implementation. 
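The UUID crosswalk generator described above can be sketched like this. A minimal sketch, assuming a dict-based entity and config layout; the function name and key names are illustrative, not the actual Java implementation:

```python
# Minimal sketch of the UUID crosswalk generator; the function and the
# entity/config layout are illustrative, not the actual Java implementation.
import uuid

def ensure_crosswalk(entity, generator_config):
    """Autofill a crosswalk (Type, Value=<UUID>, SourceTable) when none is given."""
    if entity.get("crosswalks"):
        return entity                          # client-supplied crosswalk wins
    if generator_config.get("crosswalkGeneratorType") == "UUID":
        entity["crosswalks"] = [{
            "type": generator_config["type"],            # configured Type
            "value": str(uuid.uuid4()),                  # generated Value
            "sourceTable": generator_config.get("sourceTable"),
        }]
    return entity
```

The generator is associated with the Service and User, so a client that never supplies crosswalks can still create HCP or HCO entities.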
The trigger is executed in the following situation:Find Service execution → result → decision to be madeDecisionFoundCreate ContactAffiliations with Workplace and MainWorkplace (create ReferenceAttributes) -> HCPNotFoundUserProfile: TriggerType configurationFunction result (ACCEPT OR REJECT + ObjectToCreate)TriggerTypeCREATE (ACCEPT, object)IGNORE (ACCEPT, nullObject)REJECT (REJECT, nullObject)DCR (ACCEPT, DCRObject)(custom function can be Lookup) (customFunction(Object) (return CREATE/IGNORE/REJECT)) - for example used in China to look up the STD_DPT name in RDM and make a decision based on the RDM lookup result. "
},
{
"title": "China Selective Router - model transformation flow",
"pageID": "284800572",
"pageLink": "/display/GMDM/China+Selective+Router+-+model+transformation+flow",
"content": "DescriptionChina selective router was created to enrich and transform event from COMPANY model to IQIVIA model. Component is also able to connect related mainHco with hco, based on reltio connections API, in Iqivia model its reflected as MainHco in Workplace attribute.Flow diagramStepsCollect event from input topicEnrich event - based on configuration collect hco and main hco entitiesfind attribute with refEntity uri call reltio thrue mdm-manager to collect all related hco and mainHco entities return event with list of hco, and list of mainHcoConnect hco with mainHco based on reltio connections and put mainHco attribute to hcoiterate by list of hco and call reltio to list of connection for current hcoif connection list is not empty and contains entity uri from list of mainHcoput exisitng mainhco to hco in 'OherHcoToHco' attribure (Name of attibute can be changed in configuration)Transform event from COMPANY model to Iqivia modelinvoke HCPModelConverter wiht base evnet, list of hco and list of mainHcoresult of converter will be entity in Iqivia modelput entity in output EventSend event to output topicTriggersTrigger actionComponentActionDefault timekafka messageeventTransformerTopologytransform event to Iqivia modelrealtimeDependent componentsComponentUsageMdm managergetEntitisByUrigetEntityConnectionsByUriHCPModelConvertertoIqviaModel"
},
{
"title": "Create HCP/HCO complex methods - IQVIA model (legacy)",
"pageID": "284800564",
"pageLink": "/pages/viewpage.action?pageId=284800564",
"content": "DescriptionThe IQVIA China user uses the following methods to create the HCP HCO objects - Create/Update HCP/HCO/MCO. On this linked page the API calls flow is described. The most complex and important thing is the following sections for China users:Additional logic that is activated in the following cases:3 - during HCO update parentHCO attribute is delivered in the request4 - during HCP create/update affiliations are delivered in the request5 - during HCP/HCO creation based on the configuration-specific sources are enriched with cached Relation objects and this object is injected into the main Entity as the reference attributeIQVIA China user also activates the DCR logic using this Create HCP method. The complex description of this flow is here DCR IQVIA flowCurrently, the DCR activation process from the IQVIA flow is described here - DCR generation process (China DCR)New DCR COMPANY flow is described here: DCR COMPANY flowThe below flow diagram and steps description contain the detailed description of all cases used in HCP HCO and DCR methods in legacy code.Flow diagramStepsHCP Service = China logic / STEPS:China Quality Rules:The following files contain the China DQ rules in IQVIA - executed once HUB receives the JSON from the Client.DQ rules are self-documented, details can be found in the following files: affiliatedHCO : affiliatedhco-country-china-quality-rules.yamlHCP:hcp-country-china-quality-rules.yaml(common) qualityServicePipelineProvider execute DQ rules file(common) dataProviderCrosswalkGuardrail execute GuardRailsAffiliatedHCO LOGIC (affiliatedHCOs attribute):DQ Rules check and validation on affiliatedHCOIf empty -> add only Country from HCP and Crosswalk from HCPIf not empty -> affiliatedHCOsEntity is combined as one entity from all attributes from all arrays with Country from HCP and Crosswalk from HCPCreating affiliation logic is activated when affiliatedHCOs exist and is not emptyCreate ParameterHelper:Update (true/false) 
(PATCH/POST)autoCreateHCO is used in the isAutoCreateHCO method below. It activates the create HCO operation for MAPP and CRMMI for all countries when the affiliatedHCO is not found. \naffiliationConfig:\n autoCreateHCO:\n - country: "ALL"\n sources:\n - "MAPP"\n - "CRMMI"\n\n\nRUN affiliationCreator.mapAndReplaceHospitalThe logic was designed to get MainHCO from affiliatedHCO and find it in Reltio. Only 1 element of MainHCO can exist.Then it executes the SEARCH LOGIC (by uri/crosswalk/attributes) and gets the AUTO rules result.The result is that MainHCO.objectUri is set to the URI found in Reltio (the object from the request is assigned an existing Reltio id)Then in the next methods, MainHCO contains the copy of all attributes from Reltio (the object is different than received from the client)For each affiliatedHCOs do:extractL2HCO [MainHCO] from affiliatedHCOs: (it means get MainHCO - Hospital - from affiliatedHCO)when > 1 -> Exception HCPMappingException(String.format("HCO has more than 1 affiliated HCO")when =1 assign to new Entity object:attributes (copy MainHCO.attributes)crosswalk = MainHCO.refEntity.Crosswalkuri = MainHCO.refEntity.ObjectUrinow on the returned Hospital do:[SEARCH LOGIC] COMPANY.mdm.model.client.ReltioMDMClient#findEntity[SEARCH LOGIC] shared across all China searches on HCP and HCO servicesFind by ObjectURIOrFind by CrosswalkOrFind by Match API (entities/_matches) where JSON body in MainHCO entity:Verify matches resultCheck only .*Auto.* rulesresultSize > 1 - return nullif there are more than 2 entities with different uris - return nullif 1 match returns entityIf Search result == null -> EntityNotFoundException hospital not found If found result then: set the Hospital Reltio Uri in affiliatedHCO.MainHCO.refEntity.objectUri, and copy all attributes from Reltio to MainHCO(replace MainHCO + trim)Hospital is found and has the Reltio URIRUN affiliationCreator.mapAndCreateHCO returns the mappedHCOs arrayThe main logic of this method is to create a Workplace with MainHCO in Reltio and assign the URI received from Reltio (China) or Create the affiliatedHCO object (MAPP and CRMMI)For each affiliatedHCOs doFirst Check - "HCO map dict is set, map and create standardized HCO"if (helper.getHCORDMMDict() ( means if CN then return LKUP_STD_DEPARTMENTS )logic:add to mappedHCOs (mapAndCreateStandardizedHCO)The result of this function is to set the AffiliatedHCO(Workplace).URI based on the Reltio search.We translate AffiliatedHCO.Name using the RDM LKUP_STD_DEPARTMENTS code and then make a search in Reltio.If found set URI from ReltioIf not found execute the CreateHCO method and assign the URI from Reltio based on the created objects.IF affiliatedHCO.Name is null, exit.else Lookup Reltio translate the affiliatedHCO.Name using the lookup function to Reltio with code= LKUP_STD_DEPARTMENTS and Source=HCP.crosswalkIf OK and the code existsSet Department HCO name to response code (affiliateHCO.Name changed)IF DEPARTMENT NAME is not found in RDM break and exit. 
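The shared [SEARCH LOGIC] above (find by URI, else by crosswalk, else by the Match API, keeping only .*Auto.* rules and requiring a single unambiguous candidate) can be sketched as follows. The three callbacks stand in for the real Reltio calls, and the data shapes are assumptions:

```python
# Sketch of the shared [SEARCH LOGIC] (ReltioMDMClient#findEntity).
# by_uri / by_crosswalk / match are stand-ins for the real Reltio calls.
import re

def find_entity(entity, by_uri, by_crosswalk, match):
    if entity.get("uri"):
        found = by_uri(entity["uri"])
        if found:
            return found
    if entity.get("crosswalk"):
        found = by_crosswalk(entity["crosswalk"])
        if found:
            return found
    # Match API (entities/_matches): keep only .*Auto.* rules and accept the
    # result only when it points at exactly one candidate entity.
    auto = [m for m in match(entity) if re.search(r"Auto", m["rule"])]
    if len(auto) == 1:
        return auto[0]["entity"]
    return None  # nothing found, or the auto matches are ambiguous
```

A None result is what the surrounding flow surfaces as EntityNotFoundException ("hospital not found") and, for China, as the entry point of the DCR logic.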
This may cause the Workplace to not be found and you will receive the error - HCO Entity not foundFind L1 entity (affiliatedHCO) (logic same as [SEARCH LOGIC]) (here we search affiliatedHCO with MainHCO attribute)If found set affiliateHCO.uri = reltioFoundUriElse“Create Department (L2 HCO) automatically” for ChinaGet affiliatedHCO.MainHCO object and assign to MainHCOaffiliatedHCO.MainHCO- NULL/CLEARThis clear/null on affiliatedHCO.MainHCO is required because we are executing the CREATE_HCO operation with 2 objects. 1. affiliatedHCO 2. MainHCO (parentHCO in HCO operation)This will create an HCO object with MainHCO in ReltioaffiliatedHCO.MainHCO- SET crosswalk to EVR with Random UUIDExecute logic [HCO Service = China logic / STEPS (check below)] (parameters 1= procEntity(affiliatedHCO), 2=MainHCO)check creation result:notFound -> NotFoundExceptionfailed -> RuntimeExceptionOK, -> set affiliateHCO.uri = reltioFoundUriSecond Check “Create or update affiliated HCO”FOR CRMMI and MAPP for affiliatedHCOs create the HCO in Reltio and assign the Reltio URI to affiliatedHCOs URI automatically without search and DPT lookup.isAutoCreateHCO logic based on ParameterHelper param currently PROD activated for CRMMI and MAPP for all countrieslogic:Execute logic [HCO Service = China logic / STEPS (check below)] (parameters 1= procEntity(affiliatedHCO), 2=null) - send only Workplace without HospitalHere we are adding parentHCO to the HCO request. 
Parent HCO is the affiliatedHCO object.check creation result:failed -> RuntimeExceptionOK -> set affiliateHCO.uri = reltioFoundUriThird Check “HCO auto-creation is disabled”just return the affiliatedHCO without the Reltio URI assignedRUN createHCOAffiliations (Create affiliation to L1 and L2 HCO) creating affiliation HCP to HCOExtends HCP object with MainWorkplace(affiliatedHCO.MainHCO) and Workplace(affiliatedHCO) referenced AttributesFor each affiliatedHCOs doExtract MainHCO object (this will be MainWorkplace on HCP)If empty throw RuntimeExceptionIf existsRUN createAffilationAsRef - l2HCORefName = MainWorkplace ----------- Creating MainWorkplace relation from HCP to MainHCOLogic that creates MainWorkplace affiliation between HCP and MainHCO or Workplace affiliation between HCP and affiliatedHCO (used here and below)Below we add 2 more attributes to refEntity - Workplace.ValidationStatus and Workplace.ValidationChangeDateIf MainHCO.objectURi exists, OKELSE search - (here objectUri will be, this search is used in CREATE_HCO method)If still not found throw NotFoundExceptionElse assign the HCP RefEntity and RefRelation attributes on MainWorkplaceRefEntity MainHCO.ObjectURIRefRelation Crosswalk (sourceTable=MainWorkplace,type=HCP.crosswalk.type,value=HASH)Attributes - emptyThen check if the same relation on HCP already exists comparing the MainWorkplace attribute with the generated crosswalkIf this is a new Relation add to HCP a new attribute that is MainWorkplaceRewriting validation status from main entity or set from HCO entity prepare reference attributes on WorkplaceRefEntity attributes set from:ValidationStatus or hcp.ValidationStatusValidationChangeDate or hcp.ValidationChangeDateRUN createAffilationAsRef - l2HCORefName = WorkplaceSame logic as above but:----------- Creating Workplace relation from HCP to affiliatedHCOResult HCP contains MainWorkplace and Workplace refRelation attributesAffiliatedHCO LOGIC throws in some places EntityNotFoundException - process this exception 
here:activate DCR LOGICCreate NEW_HCO("NewHCO") DCR with HCP entity and affiliatedHCOs Check if NEW_HCO is in activationCriteria for CN (GRV/FACE/CN3RDPARTY) Then check continueOnHCONotFoundActivationCriteria for China only GCP this will create HCP (continue) without affiliation(common) Reference Relation Attributes Enricher for HCP Object (relations taken from Mongo Relation Cache)CREATE HCP Reltio method - Main HCP create an object in ReltioCheck response:(common) Register COMPANYGlobalCustomerIdactivate DCR LOGIC If NEW_HCO DCR send DCR Request related to affiliatedHCOs and put this DCR to dcrRequestIf dcrRequest does not contain NEW_HCO DCRCreate NEW_HCP DCR Request with affiliatedHCO and send DCR RequestIf dcrRequest does not contain NEW_HCO DCRCreate NEW_WORKPLACE DCR Request and send DCR REQUEST(common) resolve status set created/update/failed/etc.(common) ValidationException/EntityNotFoundException/HCPMappingException/ExceptionEND HCO Service = China logic / STEPS:China Quality Rules:The following files contain the China DQ rules in IQVIA - executed once HUB receives the JSON from the Client.DQ rules are self-documented, details can be found in the following files: HCO: hco-country-china-quality-rules.yaml(common) qualityServicePipelineProvider execute DQ rules(common) dataProviderCrosswalkGuardrail execute GuardRailsParentHCO ↔ AffiliatedHCO LOGIC (parentHCO attribute processing):RUN createAffilationAsRef - = MainHCO ----------- Creating MainHCO relation from HCO to parentHCOIf parentHCO.objectURi exists, OK. 
(the objectURi can be from HCP create methods but can also be empty)ELSE -> [SEARCH LOGIC]COMPANY.mdm.model.client.ReltioMDMClient#findEntity (described in HCP section)If still not found throw NotFoundException -> Parent HCO not foundElse if found in ReltioAdjust HCO object and put MainHCO ref attribute: RefEntity parentHCO.ObjectURIRefRelation Crosswalk (sourceTable=MainHCO,type=HCP.crosswalk.type,value=HASH)Attributes - emptyThen check if the same relation on HCP already exists comparing the MainHCO attribute with the generated crosswalkIf this is a new Relation add to HCO a new attribute that is MainHCO(common) Reference Attributes Enricher for HCP ObjectCREATE HCO Reltio method - HCO create an object in ReltioCheck response:(common) Register COMPANYGlobalCustomerId(common) resolve status set created/update/failed/etc.(common) ValidationException/EntityNotFoundException/HCPMappingException/ExceptionENDTriggersTrigger actionComponentActionDefault timeoperation linkREST callManager: POST/PATCH /hco /hcp /mcocreate specific objects in MDM systemAPI synchronous requests - realtimeCreate/Update HCP/HCO/MCOREST callManager: GET /lookupget lookup Code from ReltioAPI synchronous requests - realtimeLOV readREST callManager: GET /entity?filter=(criteria)search the specific objects in the MDM systemAPI synchronous requests - realtimeSearch EntityREST callManager: GET /entityget Object from ReltioAPI synchronous requests - realtimeGet EntityKafka Request DCRManager: Push Kafka DCR eventpush Kafka DCR EventKafka asynchronous event - realtimeDCR IQVIA flowDependent componentsComponentUsageManagersearch entities in MDM systemsAPI Gatewayproxy REST and secure accessReltioReltio MDM systemDCR ServiceOld legacy DCR processor"
},
{
"title": "Create HCP/HCO complex V2 methods - COMPANY model",
"pageID": "284800566",
"pageLink": "/pages/viewpage.action?pageId=284800566",
"content": "DescriptionThis API is used to process complex HCP/HCO requests. It supports the management of MDM entities with the relationships between them. The user can provide data in the IQVIA or COMPANY model.Flow diagramFlow diagram HCP (overview)(details on main diagram)Steps HCP Map HCP to COMPANY modelExtract parent HCO - MainHCO attribute of affiliated HCO entityExecute search service for affiliated HCO and parent HCOIf affiliated HCO or parent HCO not found in MDM system: execute trigger serviceOtherwise set entity URI for found objectsExecute HCO complex service for HCO request - affiliated  HCO and parent HCO entitiesMap HCO response to contact affiliations HCP attributecreate relation between HCP and affiliated HCOcreate relation between HCP and parent HCOExecute HCP simple serviceHCP API search entity serviceSearch entity service is used to search for existing entities in the MDM system. This feature is configured for user via searchConfigHcpApi attribute. This configuration is divided for HCO and affiliated HCO entities and contains a list of searcher implementations - searcher type.attributedescriptionHCOsearch configuration for affiliated HCO entityMAIN_HCO search configuration for parent HCO entitysearcherTypetype of searcher implementationattributesattributes used for attribute search implementationHCP trigger serviceTrigger service is used to execute action when entities are missing in MDM system. 
This feature is configured for the user via the triggerType attribute.trigger typedescriptionCREATEcreate missing HCO or parent HCO via HCO complex APIDCRcreate DCR request for missing objectsIGNOREignore missing objects, flow will continue, missing objects and relations will not be createdREJECTreject request, stop processing and return response to clientFlow diagram HCO (overview)(details on main diagram)Steps HCOMap HCO request to COMPANY modelIf hco.uri attribute is null then create HCO entityCreate relationif parentHCO.uri is not null then use it to create other affiliationsif parentHCO.uri is null then use search service to find entityif entity is found then use it to create other affiliationsif entity is not found then create parentHCO and use it to create other affiliationsif Relation exists then do nothingif Relation doesn't exist then create relationTriggersTrigger actionComponentActionDefault timeREST callmanager POST/PATCH v2/hcp/complexcreate HCP, HCO objects and relationsAPI synchronous requests - realtimeREST callmanager POST/PATCH v2/hco/complexcreate HCO objects and relationsAPI synchronous requests - realtimeDependent componentsComponentUsageEntity search servicesearch entity HCP API operationTrigger serviceget trigger result operationEntity management serviceget entity connections"
},
{
"title": "Create HCP/HCO simple V2 methods - COMPANY model",
"pageID": "284806830",
"pageLink": "/pages/viewpage.action?pageId=284806830",
"content": "DescriptionV2 API simple methods are used to manage the Reltio entities - HCP/HCO/MCO.They support basic HCP/HCO/MCO request with COMPANY model.Flow diagramSteps Crosswalk generator - auto-create crosswalk - if not exists Entity validationAuthorize request - check if user has appropriate permission, country, sourceGetEntityByCrosswalk operaion-  check if entity exists in reltio, applicable for PATCH operationQuality service - checks entity attributes against validation pipelineDataProviderCrosswalkCheck - check if entity contributor provider exists in reltioExecute HTTP request - post entities Reltio operationExecute GetOrRegister COMPANYGlobalCustomerID operation Crosswalk generator serviceCrosswalk generator service is used for creating crosswalk when entity crosswalk is missing. This feature is configured for user via crosswalkGeneratorConfig attribute.attributedescriptioncrosswalkGeneratorTypecrosswalk generator implementation typecrosswalk type valuesourceTablecrosswalk source table valueTriggersTrigger actionComponentActionDefault timeREST callManager: POST/PATCH /v2/hcpcreate HCP objects in MDM systemAPI synchronous requests - realtimeREST callManager: POST/PATCH /v2/hcocreate HCO objects in MDM systemAPI synchronous requests - realtimeREST callManager: POST/PATCH /v2/mcocreate MCO objects in MDM systemAPI synchronous requests - realtimeDependent componentsComponentUsageCOMPANY Global Customer ID RegistrygetOrRegister operationCrosswalk generator servicegenerate crosswalk opertaion"
},
{
"title": "DCR IQVIA flow",
"pageID": "284800568",
"pageLink": "/display/GMDM/DCR+IQVIA+flow",
"content": "DescriptionThe following page contains a detailed description of IQVIA DCR flow for China clients. The logic is complicated and contains multiple relations.Currently, it contains the following:Complex business rules for generating DCRs,Limited flexibility with IQVIA tenants,Complex end-to-end technical processes (e.g., hand-offs, transfers, etc.)The flow is related to numerous file transfers & hand-offs.The idea is to make a simplified flow in the COMPANY model - details described here - DCR COMPANY flowThe below diagrams and description contain the current state that will be deprecated in the future.Flow diagram - Overview - high levelFlow diagram - Overview - simplified viewStepsHUB LOGICHUB Configuration overview:DCR CONFIG AND CLASSES:Logic is in the MDM-MANAGERNewHCODCRService - related to NEW_HCO, NEW_HCO_L1, NEW_HCO_L2NewHCPDCRService - related to NEW_HCPNewWorkplaceDCRService - related to NEW_WORKPLACE Config:\ndcrConfig:  \n dcrProcessing: yes\n  routeEnableOnStartup: yes\n  deadLetterEndpoint: "file:///opt/app/log/rejected/"\n  externalLogActive: yes\n  activationCriteria:\n    NEW_HCO:\n      - country: "CN"\n        sources:\n          - "CN3RDPARTY"\n          - "FACE"\n          - "GRV"\n    NEW_HCP:\n      - country: "CN"\n        sources:\n          - "GRV"\n    NEW_WORKPLACE:\n      - country: "CN"\n        sources:\n          - "GRV"\n          - "MDE"\n          - "FACE"\n          - "CN3RDPARTY"\n          - "EVR"\n\n  continueOnHCONotFoundActivationCriteria:\n    - country: "CN"\n      sources:\n        - "GCP"\n    - countries:\n        - AD\n        - BL\n        - BR\n        - DE\n        - ES\n        - FR\n        - FR\n        - GF\n        - GP\n        - IT\n        - MC\n        - MF\n        - MQ\n        - MU\n        - MX\n        - NC\n        - NL\n        - PF\n        - PM\n        - RE\n        - RU\n        - TR\n        - WF\n        - YT\n      sources:\n        - GRV\n        - GCP\n  validationStatusesMap:\n   
 VALID: validated\n    NOT_VALID: notvalidated\n    PENDING: pending\nFlow diagram - DCR ActivationStepsIQVIA/China  ACTIVATION LOGIC/ACTIVATION CRITERIA:COMPANY.mdm.manager.service.dcr.NewHCPDCRService#isActive :(common) on IQVIA the first check is on the source and country(common) NEW_HCP is activated for CN for GRV source (TRUE ACTIVATE)(common) NEW_HCO is activated for CN for CN3RDPARTY, FACE, GRV source (TRUE ACTIVATE)(common) NEW_WORKPLACE is activated for CN for GRV, MDE, CN3RDPARTY, FACE, EVR source (TRUE ACTIVATE)The first 3 isActive checks are related to common checks, here we are checking the country and source of the HCP and then we can verify more details.(REVALIDATION LOGIC) Then we check if the flag on DCR is revalidated=trueIf trueGet From Reltio the current ChangeRequest state by entityUri( Reltio Change requests connected to the entity)Remove all AWAITING_REVIEW with type NEW_HCPCheck HCP validation statusesConfigured statuses: "pending", "partial-validated", "partialValidated"From Entity get ValidationStatus attributeCompare valuesIf match foundGet EVR crosswalksPatch entity using EVR crosswalk set ValidationStatus to pending(NEW HCP isActive LOGIC) activation logic check (detailed):NEW_HCP detailed ACTIVATORCheck if ValidationStatus is pendingIf False: ValidationStatus is NOT pending:Check current ValidationStatus valueIf OV ValidationStatus is "notvalidated" or "partialValidated" do further checks:Get GRV LUD CrosswalkGet (EVR)DCR LUD Crosswalk(Check) if EVR changes are fresher than the GRV changes on ValidationStatus return FALSEGet GRV ValidationStatus current valueIf pending or partialValidated go to “If true, next”else return FALSEotherwise reject return FALSEIf true, next(Check) SpeakerStatus value and check if not "actv","enabled" then return FALSE(Check)Get Change Requests from Reltio with AWAITING_REVIEW if found return FALSE(Check)Get Entity State from Reltio, if null return FALSE(Check) Get For China the HCP.Workplace and check if 
exists, if null return FALSEFinally if above checks were not fulfilled return (TRUE ACTIVATE)(NEW HCO isActive LOGIC) activation logic check cd:NEW_HCO detailed ACTIVATORGet ValidationStatus value from source HCP entityCheck if ValidationStatus is equal to "enabled","validated","pending","WBR.STA.3", "partial-validated", "partialValidated"If true return FALSE DCR is not activated for these statusesNext go to next Check(Check) SpeakerStatus value and check if not "actv","enabled" then return FALSEGET MainHCO.Name attributeGet Workplace.Name attributeNow once we have Workplace and Hospital Name we need to:Get ChangeRequest details from Reltio related to this specific HCPCheck if any info in ChangeRequest containsHospital nameOr Department nameIf true it means that there are already some DCRs created in Reltio for this HCP in relation to this Department/WorkplaceReturn REQUST_ALREADY_EXISTS and return FALSE (not activated)Finally, if above checks were not fulfilled return (TRUE ACTIVATE)(NEW WORKPLACE isActive LOGIC) activation logic check cd:NEW_WORKPLACE  detailed ACTIVATORGet ValidationStatus value from source HCP entityCheck if ValidationStatus is equal to "enabled","validated", "WBR.STA.3"If true return FALSE DCR is not activated for these statusesNext go to next Check(Check) SpeakerStatus value and check if not "actv","enabled" then return FALSE(Check) Verify HCP.Workplaces if null - return FALSE (not activate)Next check HCP.Workplaces, check all elements andRemove duplicated refEntity.objectUrisRemove Workplaces with "enabled","validated","pending" ValidationStatusesCheck the output list if there are 0 Workplaces or workplaces.size() <2 then return FALSE, there are less than 2 workplaces so rejectNow filter Workplaces and find TrustedWorkplaces, check all elements andIf there are any workplaces related to (EMPTY) crosswalk name then filter them out, currently make DCR for all because the condition is not metCheck ChangeRequests connected with the current HCPGet 
ChangeRequest details from Reltio related to this specific HCPCheck if any info in ChangeRequest contains DCR created for the current Workplace for which we are trying to create DCRIf true it means that there are already some DCRs created in Reltio for this HCP in relation to this WorkplaceReturn REQUST_ALREADY_EXISTS and return FALSE (not activated)Finally, if the above checks were not fulfilled return (TRUE ACTIVATE)Kafka DCR sender - produce event to Kafka TopicCOMPANY.mdm.manager.service.dcr.AbstractDCRService#sendDCRRequest KAFKA EVENT DCR SENDSend a request from HCP Management Service:DCRRequest class published to Kafka DCR topic prod-gw-dcr-requestsFlow diagram - DCR event Receiver (DCR processor)StepsReceiver (DCR processor) (Camel) - COMPANY.mdm.manager.route.DCRServiceRoute LOGIC:DCRServiceRouteReceive dcr request: ${body} log input DCR bodyCheck Delay time and postpone the DCR to next runtimeDelay = Current Time - DCR Create Time (in HCP Service new object initialization time)if timeDelay < 240 minDelay based on kafka session or delayTime (depending on which value is lower)Thread SleepNote: current sessionTimeout on PROD is 30 secondsElse Proceed the DCRExecute com.COMPANY.mdm.manager.service.dcr.AbstractDCRService#processDCRRequest LOGIC:(common) Get From Reltio current Entity State(common) Check Activation (only abstract, by source and country) criteria, if active true:(common) Start processing DCR request(common) Create Change Request in Reltio (empty container)(common) Add External InfoHCPWithHCOExternalInfo objectSet NEW_HCP/HCO/WORKPLACE typeSet Reltio HCP URISet Source entity crosswalkProcess DCR Custom Logic (NEW_HCP/NEW_HCO/NEW_WORKPLACE),Description belowUpdate in Reltio the Change Request with created External InfoInitialize PfDataChangeRequest objectPfDataChangeRequest object is used by IQVIA and this is exported in an Excel file to ChinaStatus = CreatedCrosswalk EVRIn case of error delete Reltio ChangeRequest (container) and throw ExceptionIf ok set 
the status to ACCEPTEDOtherwise REJECTEDNewHCPDCRService - STEPS  - Process DCR Custom Logic (NEW_HCP)NEW_HCP custom logicCreate a new HCP type Entity (java object) EVR/DCRSet ValidationStatus to validatedSet Crosswalk = EVR get existing or create newPATCH Entity HCP Object to Reltio using change request id (update existing container only)In ExternalInfo set affiliatedHCOs objectNewHCODCRService - STEPS  - Process DCR Custom Logic (NEW_HCO, NEW_HCO_L1,NEW_HCO_L2)NEW_HCO custom logicCreate a new HCP type Entity (java object)Set crosswalks from the HCP entitySet ExternalInfo department and hospital names Get department name from DCR Request from HCP WorkplaceGet hospital name from DCR Request from HCP Workplace.MainHCOExecute COMPANY.mdm.manager.service.dcr.NewHCODCRService#processAffiliations (method return status: 1 NEW_HCO_L1(Workplace) or 2 NEW_HCO_L2(MainHCO), logic:Get affiliatedHCOs, for each element doFind L2HCO entity:Get MainHCO element from affiliatedHCO objectIf it is null, return nullIf not nullFind object in Reltio using GetEntity operationIf not foundSet EVR crosswalk on MainHCOPOST Entity HCO(MainHCO) Object to Reltio using change request id (update existing container only)And return object/entityURIIf found return object/entityURIFind L1HCO entity:Check if L2HCO is not null, then replace MainHCO attributes using the one found from Reltio and set refEntity uriFind Entity using standard search API(by uri/crosswalk/match)If not foundSet EVR crosswalkRemove MainHCO(L2) from L1 objectsetup affiliation l1HCO -l2HCO (using reference attributes add to Workplace MainHCO reference attribute to create a relation between these 2 objectsPOST Entity HCO(Workplace with MainHCO) Object to Reltio using change request id (update existing container only)And return object/entityURIIf found return object/entityURISet ExternalInfo enrich with:affiliatedHCO that contains L1+L2 objectsSet status:2 - If L2HCO URI is null1 if L1HCO URI is nullclear MainHCO to avoid Reltio errorif 
L2HCO existsadd MainWorkplace reference attribute to HCP with reference to L2 object (MainHCO)add Workplace reference attribute to HCP with reference to L1 object (affiliatedHCO)PATCH Entity HCP Object to Reltio using change request id (update existing container only)Return 1 or 2 If status = 1 set NEW_HCO_L1 dcr type in externalInfoIf status =2 set NEW_HCO_L2 dcr type in externalInfoOtherwise, DCR is not valid, all affiliations found, create affiliation without DCRCreate an HCP entity in ReltioDelete ChangeRequestNull DCR Request (DCR is not valid in that case)NewWorkplaceDCRService - STEPS  - Process DCR Custom Logic (NEW_WORKPLACE)Get HCP entity from DCR objectGet Workplace attributesRemove duplicated Workplace entityUris objectsFind HCO workplaces in Reltio using GET operation and save EntityURIsExecute the COMPANY.mdm.manager.service.dcr.NewWorkplaceDCRService#updateAffiliationsLogic (response = false)The method input is HCPDCR IDList of AffiliatedHCOs(Workplaces) found in Reltio by GET operationThe result is HCP+HCO created in the Change requestFlowGet the Change request parameterGet HCP source Entity from ReltioRemove changes from Change RequestCreate HCP Object new Java empty elementSet crosswalk to EVRCreate acceptedWorkplaces(SET) and add all Workplaces found in ReltioGet Workplaces from HCP object from ReltioSet workplacesURIs toIf response=true get from ExternalInfo from affiliatedHCOs URIsIf response=false get from Workplaces from HCP object from ReltioFor each WorkplaceURI do:Get Entity HCO from Reltio ObjectPATCH Entity HCP Object to Reltio using change request id (update existing container only) the input request is HCP object + affiliatedHCOs object found from ReltioOverride the ExternalInfo affiliatedHCOsUris with new ids created in ReltioIn the ExternalInfo set the affiliatedHCOs array to EntityURIs found in ReltioFlow diagram - DCR Response - process DCR Response from API clientStepsIQVIA/China DCRResponseRoute:DCR response processing:REST 
apiActivated by china_apps user based on the IQVIA EVRs export Used by China Client to accept/reject(Action) DCR in ReltioDCRResponse (Camel) route, possible operations:POST (dcr_id,action)Dcr_id Reltio Change Request IdAction accept/updateHCP/updateHCO/ updateAffiliations/reject/merge/mergeUpdateAuthentication service, check user and roleCheck headersDcr_id is mandatorymergeUris structure is winner,loser with 2 idsCheck if DCR in Reltio exists, otherwise throw NotFoundException and update the PfDataChangeRequest object in Reltio to closedLogic:If ChangeRequest in Reltio is other than AWAITING_REVIEW throw BadRequestException with details that DCR is already closed (because it means it is now ACCEPTED or REJECTED)Elseupdate the PfDataChangeRequest object in Reltio to completedCheck Action and do (FOR NEW_HCP):Accept: NEW_HCP acceptDCRCompose Entity and setValidationStatus = partialValidated (if partial flag in POST method)ValidationStatus = validated (if not partial)Set ValidationChangeDate to current dateGet ChangeRequest From Reltio with ExternalInfoGet HCP id from ExternalInfoGet current Entity state from ReltioPrepare Country from current EntityGet Workplace data from Reltio entity and enrich the Workplaces HCO objects from Reltio using GET operation retrieve dataupdateHCP method inputHCP with ValidationStatus/ValidationChangeDate/CountryAffiliatedHCOs from Reltio (Workplaces that were retrieved from ChangeRequest info)Execute NewHCPDCRService#updateHCP LOGIC:Common updateHCP object method that updates HCP in Reltio and closes the DCRUsed in NEW_HCP.acceptDCR/rejectDCR/updateHCO method andGet ChangeRequest From Reltio with ExternalInfoGet HCP id from ExternalInfoGet the current Entity state from ReltioPrepare Country from current EntitySet EVR crosswalkSet ValidationStatus (validated) and ValidationChangeDate (current date) if missing / If not get from requestIf input AffiliatedHCOs exists (only when Workplaces are in request)mapAndCreateHCO (create HCOs in 
Reltio)execute modifyAffiliationStatusThis method checks if in Reltio all Workplaces were created and compares it to the list of Workplaces in ChangeRequest input objectset validated or notvalidated statuses on Workplace depending on whether they were found in ReltioThe result of these 2 methods are Workplaces created in Reltio with ValidationStatus parameterCreate HCP with affiliated Workplaces(optionally) in Reltio execute complex updateHCP method -> now data is created in ReltioRemove changes from ChangeRequests from Reltio because changes were applied manually and ChangeRequest was only a container for changes, we need to clear this to not apply it one more time.Apply ChangeRequest in Reltio CLOSEDCheck the merge entities parameter and merge entities.Reject: NEW_HCP rejectDCRCompose Entity and setValidationStatus = notvalidatedSet ValidationChangeDate to the current dateupdateHCP method inputHCP with ValidationStatus/ValidationChangeDate/CountryExecute NewHCPDCRService#updateHCPUpdateAffiliation: NEW_HCP updateAffiliations logic:(input Entity object from Client)N/A for NEW_HCPUpdateHCO: NEW_HCP updateHCO logic:(input Entity object from Client)N/A for NEW_HCPUpdateHCP: NEW_HCP updateHCP:What is the difference between acceptDCR and updateHCP?In accept we can set ValidationStatus to validated or partialValidated and we get all Workplaces from ReltioIn updateHCP we receive the HCPObject from client together with DCR Id. 
We can apply changes generated by the Client, not related to the ChangeRequest object that is currently in ReltioAt the end in both cases we close and accept the ChangeRequest(input Entity object from Client)Execute NewHCPDCRService#updateHCP method (described above)Check Action and do (FOR NEW_HCO):Accept: NEW_HCO acceptDCRN/A only user can use this by HCP (updateHCP operation)Reject: NEW_HCO rejectDCRExecute _reject DCR Change Request is REJECTED in ReltioUpdateAffiliation: NEW_HCO updateAffiliations logic:N/A for this requestUpdateHCO: NEW_HCO updateHCO logic:Get ChangeRequest From Reltio with ExternalInfoGet HCP id from ExternalInfoGet current Entity state from ReltioPrepare Country from current EntityGet List of Entities from Client request and execute the:COMPANY.mdm.manager.service.dcr.NewWorkplaceDCRService#updateAffiliationsLogic: (response = true)logic described aboveTrue logic activates the following:Create HCO 1 outside of DCR object created in ReltioCreate HCO 2 outside of DCR - object created in ReltioThen affiliations are made and an object created in Reltio (HCP with DCR id in Reltio with affiliations to already created objects in Reltio (HCO1 and HCO2) but the HCP still in DCR)UpdateHCP: NEW_HCO updateHCP logic:N/A for HCO dcrCheck Action and do (FOR NEW_WORKPLACE):Accept: NEW_WORKPLACE acceptDCRGet ChangeRequest From Reltio with ExternalInfoGet HCP id from ExternalInfoGet current Entity state from ReltioPrepare Country from current EntityGet List of Workplaces from the Change Request HCP entityCOMPANY.mdm.manager.service.dcr.NewWorkplaceDCRService#updateAffiliationsLogic: (response = true)logic described aboveTrue logic activates the following:Create HCO 1 outside of DCR object created in ReltioCreate HCO 2 outside of DCR - object created in ReltioThen affiliations are made and object created in Reltio (HCP with DCR id in Reltio with affiliations to already created objects in Reltio (HCO1 and HCO2) but the HCP still in DCR)Apply ChangeRequest in 
Reltio - ACCEPTEDReject: NEW_WORKPLACE rejectDCRApply Reltio Change Request with creation of only the HCP object in ReltioUpdateAffiliation: NEW_WORKPLACE updateAffiliations logic:Same as acceptDCR but the Workplaces list is received from the Client requestUpdateHCO: NEW_WORKPLACE updateHCO logicN/AUpdateHCP: NEW_WORKPLACE updateHCP logic:N/Aupdate the PfDataChangeRequest object in Reltio to closedTriggersTrigger actionComponentActionDefault timeOperation linkDetailsREST callManager: POST/PATCH /hcpcreate specific objects in MDM systemAPI synchronous requests - realtimeCreate/Update HCP/HCO/MCOInitializes the DCR requestKafka Request DCRManager: Push Kafka DCR eventpush Kafka DCR EventKafka asynchronous event - realtimeDCR IQVIA flowPush DCR event to DCR processorKafka Request DCRDCRServiceRoute: Poll Kafka DCR eventConsumes Kafka DCR eventsKafka asynchronous event - realtimeDCR IQVIA flowPolls/consumes DCR events and processes themRest call - DCR responseManager:DCRResponseRoute POST /dcrResponse/{id}/acceptupdates DCR by API (accept/reject etc.)API synchronous requests - realtimeDCR IQVIA flowAPI to accept DCRRest call - DCR responseManager:DCRResponseRoute POST /dcrResponse/{id}/updateHCPupdates DCR by API (accept/reject etc.)API synchronous requests - realtimeDCR IQVIA flowAPI to update HCP through DCRRest call - DCR responseManager:DCRResponseRoute POST /dcrResponse/{id}/updateHCOupdates DCR by API (accept/reject etc.)API synchronous requests - realtimeDCR IQVIA flowAPI to update HCO through DCRRest call - DCR responseManager:DCRResponseRoute POST /dcrResponse/{id}/updateAffiliationsupdates DCR by API (accept/reject etc.)API synchronous requests - realtimeDCR IQVIA flowAPI to update HCO to HCO affiliations through DCRRest call - DCR responseManager:DCRResponseRoute POST /dcrResponse/{id}/rejectupdates DCR by API (accept/reject etc.)API synchronous requests - realtimeDCR IQVIA flowAPI to reject DCRRest call - DCR responseManager:DCRResponseRoute POST 
/dcrResponse/{id}/mergeupdates DCR by API (accept/reject etc.)API synchronous requests - realtimeDCR IQVIA flowAPI to merge DCR HCP entitiesDependent componentsComponentUsageManagersearch entities in MDM systemsAPI Gatewayproxy REST and secure accessReltioReltio MDM systemManagerOld legacy DCR processor"
},
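The DCR response flow above rejects any action once Reltio no longer shows the ChangeRequest as AWAITING_REVIEW, and requires a mandatory dcr_id plus a known action. A minimal sketch of that guard is below; the class and method names are illustrative, not the actual HUB code.

```java
import java.util.Set;

class DcrResponseGuard {
    // Actions listed for the DCRResponse route in the text above.
    static final Set<String> SUPPORTED_ACTIONS = Set.of(
            "accept", "updateHCP", "updateHCO", "updateAffiliations",
            "reject", "merge", "mergeUpdate");

    /** A DCR can only be acted on while Reltio still shows it as open. */
    static boolean canProcess(String changeRequestState) {
        return "AWAITING_REVIEW".equals(changeRequestState);
    }

    /** Mirrors the route's validation: dcr_id is mandatory and the action must be known. */
    static void validate(String dcrId, String action, String state) {
        if (dcrId == null || dcrId.isBlank())
            throw new IllegalArgumentException("dcr_id is mandatory");
        if (!SUPPORTED_ACTIONS.contains(action))
            throw new IllegalArgumentException("unsupported action: " + action);
        if (!canProcess(state))
            // The real route raises a BadRequestException: DCR is already closed.
            throw new IllegalStateException("DCR is already closed (state=" + state + ")");
    }
}
```

A caller would invoke `DcrResponseGuard.validate(dcrId, action, state)` before dispatching to the per-type (NEW_HCP/NEW_HCO/NEW_WORKPLACE) logic.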
{
"title": "DCR COMPANY flow",
"pageID": "284800570",
"pageLink": "/display/GMDM/DCR+COMPANY+flow",
"content": "DescriptionTBD Flow diagram (drafts)StepsTBDTriggersTrigger actionComponentActionDefault timeDependent componentsComponentUsage"
},
{
"title": "Model Mapping (IQVIA<->COMPANY)",
"pageID": "284800575",
"pageLink": "/pages/viewpage.action?pageId=284800575",
"content": "DescriptionThe interface is used to map MDM Entities between IQIVIA and COMPANY model.Flow diagram-MappingAddress ↔ Addresses attribute mappingIQIVIA MODEL ATTRIBUTE [Address]COMPANY MODEL ATTRIBUTE [Addresses]AddressPremiseAddressesPremiseAddressBuildingAddressesBuildingAddressVerificationStatusAddressesVerificationStatusAddressStateProvinceAddressesStateProvinceAddressCountryAddressesCountryAddressAddressLine1AddressesAddressLine1AddressAddressLine2AddressesAddressLine2AddressAVCAddressesAVCAddressCityAddressesCityAddressNeighborhoodAddressesNeighborhoodAddressStreetAddressesStreetAddressGeolocationLatitudeAddressesLatitudeAddressGeolocationLongitudeAddressesLongitudeAddressGeolocationGeoAccuracyAddressesGeoAccuracyAddressZipZip4AddressesZip4AddressZipZip5AddressesZip5AddressZipPostalCodeAddressesPOBoxPhone attribute mappingsIQIVIA MODEL ATTRIBUTECOMPANY MODEL ATTRIBUTEPhoneLineTypePhoneLineTypePhoneLocalNumberPhoneLocalNumberPhoneNumberPhoneNumberPhoneFormatMaskPhoneFormatMaskPhoneGeoCountryPhoneGeoCountryPhoneDigitCountPhoneDigitCountPhoneCountryCodePhoneCountryCodePhoneGeoAreaPhoneGeoAreaPhoneFormattedNumberPhoneFormattedNumberPhoneAreaCodePhoneAreaCodePhoneValidationStatusPhoneValidationStatusPhoneTypeIMSPhoneTypePhoneActivePhonePrivacyOptOutEmail attribute mappingsIQIVIA MODEL ATTRIBUTECOMPANY MODEL ATTRIBUTEEmailEmailEmailDomainEmailDomainEmailDomainTypeEmailDomainTypeEmailValidationStatusEmailValidationStatusEmailTypeIMSEmailTypeEmailActiveEmailPrivacyOptOutEmailUsernameEmailSourceSourceNameHCO mappingsIQIVIA MODEL ATTRIBUTECOMPANY MODEL 
ATTRIBUTECountryCountryNameNameTypeCodeTypeCodeSubTypeCodeSubTypeCodeCMSCoveredForTeachingCMSCoveredForTeachingCommentersCommentersCommHospCommHospDescriptionDescriptionFiscalFiscalGPOMembershipGPOMembershipHealthSystemNameHealthSystemNameNumInPatientsNumInPatientsResidentProgramResidentProgramTotalLicenseBedsTotalLicenseBedsTotalSurgeriesTotalSurgeriesVADODVADODAcademicAcademicKeyFinancialFiguresOverviewSalesRevenueUnitOfSizeKeyFinancialFiguresOverviewSalesRevenueUnitOfSizeClassofTradeNSpecialtyClassofTradeNSpecialtyClassofTradeNClassificationClassofTradeNClassificationIdentifiersIDIdentifiersIDIdentifiersTypeIdentifiersTypeSourceNameOriginalSourceNameNumOutPatientsOutPatientsNumbersStatusValidationStatusUpdateDateSourceUpdateDateWebsiteURLWebsiteWebsiteURLOtherNames-OtherNamesName-Type (constant: OTHER_NAMES)OfficialName-OtherNamesName-Type (constant: OFFICIAL_NAME)Address*Addresses*Phone*Phone*HCP mappingsIQIVIA MODEL ATTRIBUTECOMPANY MODEL ATTRIBUTEDESCRIPTIONCountryCountryDoBDoBFirstNameFirstNamecase: (IQVIA -> COMPANY), if IQIVIA(FirstName) is empty then IQIVIA(Name) is used as COMPANY(FirstName) mapping resultLastNameLastNamecase: (IQVIA -> COMPANY), if IQIVIA(LastName) is empty then IQIVIA(Name) is used as COMPANY(LastName) mapping resultNameNameNickNameNickNameGenderGenderPrefferedLanguagePrefferedLanguagePrefixPrefixSubTypeCodeSubTypeCodeTitleTitleTypeCodeTypeCodePresentEmploymentPresentEmploymentCertificatesCertificatesLicenseLicenseIdentifiersIDIdentifiersIDIdentifiersTypeIdentifiersTypeUpdateDateSourceUpdateDateSourceNameSourceValidationSourceNameValidationChangeDateSourceValidationChangeDateValidationStatusSourceValidationStatusSpeakerSpeakerLevelSpeakerLevelSpeakerSpeakerTypeSpeakerTypeSpeakerSpeakerStatusSpeakerStatusSpeakerIsSpeakerIsSpeakerDPPresenceChannelCodeDigitalPresenceChannelCodeMETHOD PARAM<Workplaces>ContactAffiliationscase: (IQVIA -> COMPANY), param workplaces is converted to HCO and added to ContactAffiliationsMETHOD 
PARAM<MainWorkplaces>ContactAffiliationscase: (IQVIA -> COMPANY), param main workplaces are converted to HCO and added to ContactAffiliationsWorkplaceMETHOD PARAM<Workplaces>case: (COMPANY → IQIVIA), param workplaces is converted to HCO and assigned to WorkplaceMainWorkplaceMETHOD PARAM<MainWorkplaces>case: (COMPANY → IQIVIA),  param main workplaces are converted to HCO and assigned to MainWorkplaceAddress*Addresses*Phone*Phone*Email*Email*TriggersTrigger actionComponentActionDefault timeMethod invocationHCPModelConverter.classtoCOMPANYModel(EntityKt  iqiviaModel, List<EntityKt> workplaces, List<EntityKt> mainWorkplaces, List<AttributeValueKt> addresses)realtimeMethod invocationHCPModelConverter.classtoCOMPANYModel(EntityKt  iqiviaModel, List<EntityKt> workplaces, List<EntityKt> mainWorkplaces)realtimeMethod invocationHCPModelConverter.classtoIqiviaModel(EntityKt  COMPANYModel, List<EntityKt> workplaces, List<EntityKt> mainWorkplaces)realtimeMethod invocationHCOModelConverter.classtoCOMPANYModel(EntityKt iqiviaModel)realtimeMethod invocationHCOModelConverter.classtoIqiviaModel(EntityKt  COMPANYModel)realtimeDependent componentsComponentUsagedata-modelMapper uses models to convert between them"
},
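Two of the mapping rules in the tables above can be sketched compactly: the simple `Address` → `Addresses` attribute rename, and the FirstName/LastName fallback to Name when the IQVIA field is empty. This is an illustrative sketch only (it covers the plain prefix-rename rows, not special cases like Geolocation or Zip), and the class name is hypothetical, not the real HCPModelConverter.

```java
class ModelMappingSketch {
    /** Simple rows keep their suffix; only the "Address" prefix becomes "Addresses". */
    static String mapAddressAttribute(String iqviaName) {
        return iqviaName.replaceFirst("^Address", "Addresses");
    }

    /** case (IQVIA -> COMPANY): an empty FirstName/LastName falls back to Name. */
    static String mapNameField(String iqviaValue, String iqviaName) {
        return (iqviaValue == null || iqviaValue.isEmpty()) ? iqviaName : iqviaValue;
    }
}
```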
{
"title": "User Profile (China user)",
"pageID": "284800562",
"pageLink": "/pages/viewpage.action?pageId=284800562",
"content": "DescriptionThe user profile got new attributes used in the V2 API.AttributeDescriptionsearchConfigHcpApiconfig search entity service for HCP API - contains HCO/MAIN_HCO search entity type configurationsearchConfigHcoApiconfig search entity service for HCO APIsearcherTypetype of searcher implementationavailable values: [UriEntitySearch/CrosswalkEntitySearch/AttributesEntitySearch]attributesattribute names used in AttributesEntitySearchtriggerTypeV2 HCP/HCO complex API trigger configuration - action executed when there are missing entities in requestavailable values: [REJECT/IGNORE/DCR/CREATE]crosswalkGeneratorConfigauto-create entity crosswalk - if missing in requestcrosswalkGeneratorTypetype of crosswalk generator, available values: [UUID]typeauto-generated crosswalk type valuesourceTableauto-generated crosswalk source table valuesourceModelsource model of entity provided by user for V2 HCP/HCO complex,available values: [COMPANY,IQIVIA] Flow diagramTBDStepsTBDTriggersTrigger actionComponentActionDefault timeDependent componentsComponentUsage"
},
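The triggerType values above decide what the V2 complex API does when a referenced entity is missing from the request. The dispatch below is only a sketch of how such a setting could drive behavior; the enum values come from the table, everything else is an assumption.

```java
class TriggerTypeSketch {
    // Values from the user-profile table above.
    enum TriggerType { REJECT, IGNORE, DCR, CREATE }

    /** Illustrative mapping from triggerType to the handling of a missing entity. */
    static String onMissingEntity(TriggerType t) {
        switch (t) {
            case REJECT: return "fail the whole request";
            case IGNORE: return "drop the missing reference and continue";
            case DCR:    return "open a Data Change Request for the missing entity";
            case CREATE: return "auto-create the entity (crosswalk from crosswalkGeneratorConfig)";
            default:     throw new IllegalArgumentException("unknown triggerType");
        }
    }
}
```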
{
"title": "User",
"pageID": "284811104",
"pageLink": "/display/GMDM/User",
"content": "The user is configured with a profile that is shared between all MDM services. Configuration is provided via yaml files and loaded at boot time. To use the profile in any application, import the com.COMPANY.mdm.user.UserConfiguration configuration from the mdm-user module. This operation will allow you to use the UserService class, which is used to retrieve users.User profile configurationattributedescriptionnameuser namedescriptionuser descriptiontokentoken used for authenticationgetEntityUsesMongoCacheretrieve entity from mongo cache in get entity operationlookupsUseMongoCacheretrieve lookups from mongo cache in LookupServicetrimtrimming entities/relationships in response to the clientguardrailsEnabledcheck if contributor provider crosswalk exists with data provider crosswalkrolesuser permissionscountriesuser allowed countriessourcesuser allowed crosswalksdefaultClientdefault mdm client namevalidationRulesForValidateEntityServicevalidation rules configurationbatchesuser allowed batches configurationdefaultCountryuser default country, used in api-router, when country is not provided in requestoverrideZonesuser country-zone configuration that overwrites default api-router behaviorkafkauser kafka configuration, used in kafka management servicereconciliationTargetsreconciliation targets, used in event resend service"
},
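A few of the profile attributes above (roles, countries, sources) gate what a client may do. The sketch below shows the hypothetical shape of such a check; the real profile comes from the mdm-user module's UserService and is not reproduced here.

```java
import java.util.Set;

class UserProfileSketch {
    final String name;
    final Set<String> roles, countries, sources;

    UserProfileSketch(String name, Set<String> roles,
                      Set<String> countries, Set<String> sources) {
        this.name = name;
        this.roles = roles;
        this.countries = countries;
        this.sources = sources;
    }

    /** A write is permitted only when role, country and crosswalk source all match. */
    boolean mayWrite(String role, String country, String source) {
        return roles.contains(role) && countries.contains(country) && sources.contains(source);
    }
}
```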
{
"title": "Country Cluster",
"pageID": "234715057",
"pageLink": "/display/GMDM/Country+Cluster",
"content": "General assumptionsMDM HUB will be populating country cluster information.Initially, only the default cluster country will be sent. In the future, other clusters can be calculated and distributed to downstream clients.In the current phase, the default clustering model is based on OneKey country clustering.Changes are backward compatible for downstream systems if they are not interested in consuming the cluster information.defaultCountryCluster is an optional attribute. In case of a missing mapping, it will not be included in the JSON.Example of mapping: CountrycountryClusterAndorra (AD)France (FR)Monaco (MC)France (FR)Changes in MDM HUB1. Enrichment of Kafka events with extra parameter defaultCountryClusterIt will be calculated based on a new config table that maps countries to cluster countriesconfiguration table must be implemented on MDM Publisher sideIt can be used in routing rules in filtering events based on defaultCountryCluster2. Add a new column COUNTRY_CLUSTER representing the default country cluster in views:ENTITIES, HCO, HCP, ENTITY_UPDATE_DATES, MDM_ENTITY_CROSSWALKSAdd country cluster config table 3. Handling cluster country sent by PforceRx in DCR process in a transparent wayIf a new entity then the country will be set based on the address country.If an entity exists then the country will be set based on the existing country in ReltioChange in the event model{  "eventType": "HCP_CHANGED",  "eventTime": 1514976138977,  "countryCode": "MC",  "defaultCountryCluster": "FR",   "entitiesURIs": ["entities/ysCkGNx"  ] ,  "targetEntity":  {  "uri": "entities/ytY3wd9",  "type": "configuration/entityTypes/HCP",Changes on client-sideMULEMULE must map defaultCountryCluster to country sent to PforceRx in the GRV pipeline.ODSODS ETL process must use column cluster_country instead of country while reading data from Snowflake"
},
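The enrichment rule above can be sketched as a config-map lookup: defaultCountryCluster is derived from the country code (AD → FR, MC → FR per the example) and simply omitted when no mapping exists, which keeps events backward compatible. This is a sketch, not the MDM Publisher implementation.

```java
import java.util.Map;
import java.util.Optional;

class CountryClusterSketch {
    // Example mappings from the table above; the real config table lives on the Publisher side.
    static final Map<String, String> CLUSTER = Map.of("AD", "FR", "MC", "FR");

    /** Empty result means the attribute is left out of the event JSON entirely. */
    static Optional<String> defaultCountryCluster(String countryCode) {
        return Optional.ofNullable(CLUSTER.get(countryCode));
    }
}
```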
{
"title": "Create/Update HCP/HCO/MCO",
"pageID": "164470018",
"pageLink": "/pages/viewpage.action?pageId=164470018",
"content": "DescriptionThe REST interfaces exposed through the MDM Manager component are used by clients to update or create HCP/HCO/MCO objects. The update process is supported by all connected MDMs (Reltio and Nucleus360) with some limitations. At this moment Reltio MDM is fully supported for entity types: HCP, HCO, MCO. Nucleus360 supports only the HCP update process. The decision of which MDM should be selected to process the update request is controlled by configuration. The configuration map defines the assignment of each country to the MDM which stores that country's data. Based on this map, MDM Manager selects the correct MDM system to forward the update request.The difference between Create and Update operations is the additional API request during the update operation. During the update, an entity is retrieved from the MDM by the crosswalk value for validation purposes. Diagrams 1 and 2 present the standard flow. In diagrams 3, 4, 5 and 6 additional logic is optional and activated once a specific condition or attribute is provided. 
The diagrams below present a sequence of steps in processing client calls.Update 2023-09:To increase Update HCP/HCO/MCO performance, the logic was slightly altered:ContributorProvider crosswalk is now looked up in MDM Hub Cache Databaseif entity not found by this crosswalk, fallback lookup using MDM APIafter confirming that the ContributorProvider crosswalk exists in MDM, add "partialOverride" to the request and continue processing with the Create HCP/HCO/MCO logicFlow diagram1Create HCP/HCO/MCO2 Update HCP/HCO/MCO3 (additional optional logic) Create/Update HCO with ParentHCO 4 (additional optional logic) Create/Update HCP with AffiliatedHCO&Relation5 (additional optional logic) Create/Update HCO with ParentHCO 6 (additional optional logic) Create/Update HCP with source crosswalk replace StepsThe client sends an HTTP request to the MDM Manager endpoint.Kong API Gateway receives requests and handles authentication.If the authentication succeeds, the request is forwarded to the MDM Manager component.MDM Manager checks user permissions to call createEntity (HCP/HCO/MCO) operation and the correctness of the request.If the user's permissions are correct, MDM Manager proceeds with creating the specific object and returns the created MDM object to the Client.During partialUpdate, the entity is retrieved from MDM before the update.Additional logic will be activated in the following cases:3 - during HCO update the parentHCO attribute is delivered in the request4 - during HCP create/update affiliations are delivered in the request5 - during HCP/HCO creation, based on the configuration, specific sources are enriched with cached Relation objects and this object is injected into the main Entity as the reference attribute6 - during HCP create/update when conditions are met, the source crosswalk is replaced from MAPP to MAPP_ATTENDEETriggersTrigger actionComponentActionDefault timeREST callManager: POST/PATCH /hco /hcp /mcocreate specific objects in MDM systemAPI synchronous requests - realtimeDependent 
componentsComponentUsageManagercreate update Entities in MDM systemsAPI Gatewayproxy REST and secure accessReltioReltio MDM systemNucleusNucleus MDM system"
},
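The 2023-09 performance change above boils down to a cache-first lookup with an MDM API fallback. A minimal sketch of that order is below; the two lookup functions stand in for the real Mongo cache and Reltio calls.

```java
import java.util.Optional;
import java.util.function.Function;

class CrosswalkLookupSketch {
    /**
     * Check the MDM Hub cache first; only on a miss fall back to the MDM API.
     * A cache hit saves one MDM round trip, the fallback keeps correctness.
     */
    static Optional<String> findEntityUri(String crosswalk,
                                          Function<String, Optional<String>> cacheLookup,
                                          Function<String, Optional<String>> mdmLookup) {
        return cacheLookup.apply(crosswalk).or(() -> mdmLookup.apply(crosswalk));
    }
}
```

Once the crosswalk is confirmed to exist, the text above says the request gets "partialOverride" added and continues through the Create logic.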
{
"title": "Create/Update Relations",
"pageID": "164469796",
"pageLink": "/pages/viewpage.action?pageId=164469796",
"content": "DescriptionThe operation creates or updates Relations. MDM Manager manages the relations in the Reltio MDM system. The user can update a specific relation using a crosswalk to match, or create a new object using unique crosswalks and information about the start and end objectsThe detailed process flow is shown below.Flow diagramCreate/Update RelationStepsThe client sends HTTP requests to the MDM Manager endpoint.Kong Gateway receives requests and handles authentication.If the authentication succeeds, the request is forwarded to the MDM Manager component.MDM Manager checks user permissions to call createRelation/updateRelation operation and the correctness of the request.If the user's permissions are correct, MDM Manager proceeds with the create/update operation.OPTIONALLY: after a successful update (ResponseStatus != failed), relations are cached in MongoDB; the relations are then reused in the ReferenceAttributeEnrichment Service (currently configured for the GBLUS ONEKEY Affiliations). This is required to enrich these relations to the HCP/HCO objects during the update; this prevents losing reference attributes during HCP create operation.OPTIONALLY: PATCH operation adds the PARTIAL_OVERRIDE header to Reltio switching the request to the partial update operation.TriggersTrigger actionComponentActionDefault timeREST callManager: POST/PATCH/relationscreate or updates the Relations in MDM systemAPI synchronous requests - realtimeDependent componentsComponentUsageManagercreate or updates the Relations in MDM system"
},
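The two optional behaviors above can be sketched as small predicates: PATCH switches Reltio to a partial update via the PARTIAL_OVERRIDE header, and a relation is cached for later ReferenceAttributeEnrichment only when the update did not fail. The header value and the status comparison are assumptions for illustration.

```java
import java.util.HashMap;
import java.util.Map;

class RelationRequestSketch {
    /** PATCH adds the PARTIAL_OVERRIDE header (value assumed) for a partial update. */
    static Map<String, String> headersFor(String httpMethod) {
        Map<String, String> h = new HashMap<>();
        if ("PATCH".equals(httpMethod)) h.put("PARTIAL_OVERRIDE", "true");
        return h;
    }

    /** Cache the relation only when ResponseStatus != failed. */
    static boolean shouldCache(String responseStatus) {
        return !"failed".equalsIgnoreCase(responseStatus);
    }
}
```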
{
"title": "Create/Update/Delete tags",
"pageID": "172295228",
"pageLink": "/pages/viewpage.action?pageId=172295228",
"content": "The REST interfaces exposed through the MDM Manager component are used by clients to update, delete or create tags assigned to entity objects. The difference between Create and Update is that tags are added, and if the option returnObjects is set to true, all previously added and new tags will be returned. The Delete action removes one tag.The diagrams below present a sequence of steps in processing client calls.Flow diagramCreate tagUpdate tagDelete tagStepsThe client sends an HTTP request to the MDM Manager endpoint.Kong API Gateway receives requests and handles authentication.If the authentication succeeds, the request is forwarded to the MDM Manager component.MDM Manager checks user permissions to call createEntityTags operation and the correctness of the request.If the user's permissions are correct, MDM Manager proceeds with creating tags for the entity and returns the created tags in MDM to the Client.TriggersTrigger actionComponentActionDefault timeREST callManager: POST/PATCH/DELETE /entityTagscreate specific objects in MDM systemAPI synchronous requests - realtimeDependent componentsComponentUsageManagercreate update delete Entity Tags in MDM systemsAPI Gatewayproxy REST and secure accessReltioReltio MDM system"
},
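The tag semantics above (Create/Update both add; returnObjects=true returns the full set; Delete removes a single tag) can be sketched as a set, purely for illustration:

```java
import java.util.LinkedHashSet;
import java.util.Set;

class EntityTagsSketch {
    private final Set<String> tags = new LinkedHashSet<>();

    /** Create and Update both add tags; with returnObjects=true the whole tag set comes back. */
    Set<String> addTags(Set<String> newTags, boolean returnObjects) {
        tags.addAll(newTags);
        return returnObjects ? new LinkedHashSet<>(tags) : Set.of();
    }

    /** Delete removes exactly one tag; returns whether it was present. */
    boolean deleteTag(String tag) {
        return tags.remove(tag);
    }
}
```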
{
"title": "DCR flows",
"pageID": "415205424",
"pageLink": "/display/GMDM/DCR+flows",
"content": "\n\n\n\nOverviewDCR (Data Change Request) process helps to improve existing data in source systems. A proposal for change is created by source systems as a DCR object (sometimes also called VR - Validation Request) which is usually routed by MDM HUB to DS (Data Stewards) either in Reltio or in Third party validators (OneKey, Veeva OpenData). Response is provided twofold:response for specific DCR - metadataprofile data update as a direct effect of a DCR processing - payloadGeneral DCR process flow High level solution architecture for DCR flowSource: Lucid\n\n\n\n\n\nSolution for OneKey (OK)\n\n\n\nSolution for Veeva OpenData (VOD)\n\n\n\n\n\nArchitecture highlightsActors involved: PforceRX, Reltio, HUB, OneKeyKey components: DCR Service 2 (second version) for AMER, EMEA, APAC, US tenantsProcess details:DCRs are created directly by PforceRx using the HUB's DCR APIPforceRx checks for DCR status updates every 24h → finds out which DCRs have been updated (since the last check 24h ago) and then pulls details from each one with /dcr/_status Integration with OneKey is realized by APIs - DCRs are created with /vr/submit and their status is verified every 8h with /vr/traceData profile updates (payload) are delivered via CSV and S3 and ETLed (VOD batch) to Reltio with COMPANY's helpDCRRegistry & DCRRegistryVeeva collections are used in Mongo for tracking purposes\n\n\n\nArchitecture highlightsActors involved: Data Stewards in Reltio, HUB, Veeva OpenData (VOD)Key components: DCR Service 2 (second version) for AMER, EMEA, APAC, US tenantsProcess details:DCRs are created by Data Stewards (DSRs) in Reltio via Suggest / Send to 3rd Party Validation - input for DSRs is provided by reports from PforceRxCommunication with Veeva via S3<>SFTP and synchronization GMTF jobs. 
DCRs are sent and received in batches every 24h. DCR metadata is exchanged via multiple CSV files ZIPedData profile updates (payload) are delivered via CSV and S3 and ETLed (VOD batch) to Reltio with COMPANY's help  DCRRegistry & DCRRegistryONEKEY collections are used in Mongo for tracking purposes\n\n\n\n\n\nSolution for IQVIA Highlander (HL) \n\n\n\nSolution for OneKey on GBLUS - sources ICEU, Engage, GRV\n\n\n\n\n\nArchitecture highlightsActors involved: Veeva on behalf of PforceRX, Reltio, HUB, IQVIA wrapperKey components: DCR Service (first version) for GBLUS tenantProcess details:DCRs are created by Veeva sending CSV requests - based on information acquired from PforceRxIntegration HUB <> Veeva → via files and S3<>SFTP. HUB confirms DCR creation by returning file reports back to VeevaIntegration HUB <> IQVIA wrapper → via files and S3HUB is responsible for translation of the Veeva DCR CSV format to the IQVIA CSV format; the wrapper then creates DCRs in ReltioData Stewards approve or reject the DCRs in Reltio which updates data profiles accordingly. PforceRx receives updates about changes in ReltioDCRRequest collection is used in Mongo for tracking purposes\n\n\n\nArchitecture highlights (draft)Actors involved: HUB, IQVIA wrapperKey components: DCR Service (first version) for GBLUS tenantProcess details:POST events from sources are captured - some of them are translated to direct DCRs, some of them are gathered and then pushed via flat files to be transformed into DCRs to OneKey \n\n\n"
},
{
"title": "DCR generation process (China DCR)",
"pageID": "164470008",
"pageLink": "/pages/viewpage.action?pageId=164470008",
"content": "The gateway supports the following DCR types:NewHCP created when a new HCP is registered in Reltio and requires external validationNewHCOL1 created when HCO Level 1 is not found in ReltioNewHCOL2 created when HCO Level 2 is not found in ReltioMultiAffil created when a profile has multiple affiliations DCR generation processes are handled in two steps:During HCP modification, if initial activation criteria are met, then a DCR request is generated and published to KAFKA <env>-gw-dcr-requests topic.In the next step, the internal Camel route DCRServiceRoute reads requests generated from the topic and processes as follows:checks if the time specified by delayPrcInSeconds has elapsed since request generation; this makes sure that the Reltio batch match process has finished and newly inserted profiles have merged with the existing ones.checks if the entity that caused DCR generation still exists;checks full activation criteria (table below) on the latest state of the target entity, if criteria are not met then the request is closedcreates DCR in Reltioupdates external infocreates COMPANYDataChangeRequest entity in Reltio for tracking and exporting purposes.Created DCRs are exported by the Informatica ETL process managed by IQVIA.The DCR applying process (reject/approve actions) is executed through the MDM HUB DCR response API by the external app managed by the MDE team.The table below presents DCR activation criteria handled by the system.Table 9. 
DCR activation criteriaRuleNewHCPMultiAffiliationNewHCOL2NewHCOL1Country inCNCNCNCNSource inGRVGRV, MDE, FACE, EVR, CN3RDPARTYGRV, FACE, CN3RDPARTYGRV, FACE, CN3RDPARTYValidationStatus inpending, partial-validatedor, if merged:OV: notvalidated, GRV nonOV: pending/partial-validatedvalidated, pendingvalidated, pendingvalidated, pendingSpeakerStatus inenabled, nullenabled, nullenabled, nullenabled, nullWorkplaces count>1Hospital foundtruetruefalsetrueDepartment foundtruetruefalseSimilar DCR created in the pastfalsefalsefalsefalseUpdate: December 2021NewHCP DCR is now created if ValidationStatus is pending or partial-validatedNewHCP DCR is also created if OV ValidationStatus is notvalidated, but most-recently updated GRV crosswalk provides non-ov ValidationStatus as pending or partial-validated - in case HCP gets merged into another entity upon creation/modification:DCR request processing history is now available in Kibana via Transaction Log - dashboard API Calls, transaction type "CreateDCRRoute"DCR response processing history (DCR approve/reject flow) is now available in Kibana via Transaction Log - dashboard API Calls, transaction type "DCRResponse""
},
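The delay gate described above (wait for delayPrcInSeconds after request generation so the Reltio batch match can finish) can be sketched as a single time comparison; the class and method names are illustrative, not the DCRServiceRoute code.

```java
import java.time.Duration;
import java.time.Instant;

class DcrDelayGateSketch {
    /**
     * A queued DCR request is processed only once delayPrcInSeconds has elapsed
     * since it was generated, giving newly inserted profiles time to merge.
     */
    static boolean readyToProcess(Instant generatedAt, long delayPrcInSeconds, Instant now) {
        return !now.isBefore(generatedAt.plus(Duration.ofSeconds(delayPrcInSeconds)));
    }
}
```

A request that is not yet ready would simply stay on (or be returned to) the Kafka topic until the next poll, per the two-step design above.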
{
"title": "HL DCR [Decommissioned April 2025]",
"pageID": "164470085",
"pageLink": "/pages/viewpage.action?pageId=164470085",
"content": "ContactsVendorContactPforceRXDL-PForceRx-SUPPORT@COMPANY.comIQVIA (DCR Wrapper)COMPANY-MDM-Support@iqvia.com As a part of Highlander project, the DCR processing flow was created which realizes following scenarios:Update HCP account details i.e. specialty, address, name (different sources of elements),Add new HCP account with primary affiliation to an existing organization,Add new HCP account with a new business account,Update HCP and add affiliation to a new HCO,Update HCP account details and remove existing details i.e. birth date, national id, …,Update HCP account and add new non primary affiliation to an existing organization,Update HCP account and add new primary affiliation to an existing organization,Update HCP account inactivate primary affiliation. Person account has more than 1 affiliation,Update HCP account inactivate non primary affiliation. Person account has more than 1 affiliation,Inactivate HCP account,Update HCP and add a private address,Update HCP and update existing private address,Update HCP and inactivate a private address,Update HCO details i.e. 
address, name (different sources of elements),Add new HCO account,Update HCO and remove details,Inactivate HCO account,Update HCO address,Update HCO and add new address,Update HCO and inactivate address,Update HCP's existing affiliation.Above cases has been aggregated into six generic types in internal HUB model:NEW_HCP_GENERIC - represents cases when the new HCP object is created with or without affiliation to HCO,UPDATE_HCP_GENERIC - aggregates cases when the existing HCP object is changed,DELETE_HCP_GENERIC - represents the case when HCP is deactivating,NEW_HCO_GENERIC - aggregates scenarios when new HCO object is created with or without affiliations to parent HCO,UPDATE_HCO_GENERIC - represents cases when existing HCO object is changing,DELETE_HCO_GENERIC - represents the case when HCO is deactivating.General Process OverviewProcess steps:Veeva uploads DCR request file to FTP location,PforceRx Channel component downloads the DCR request file,PforceRx Channel validates and maps each DCR requests to internal model,PforceRx Channel sends the request to DCR Service,DCR Service process the request: validating, enriching and mapping to Iqvia DCR Wrapper,PforceRx Channel prepares the report file containing technical status of DCR processing - at this time, report will contain only requests which don't pass the validation,Scheduled process in DCR Service, prepares the Wrapper requests file and uploads this to S3 location.DCR Wrapper processes the file: creating DCRs in Reltio or rejecting the request due to errors. After that the response file is published to s3 location,DCR Service downloads the response and updates DCRs status,Scheduled process in PforceRx Channel gets DCR requests and prepares next technical report - at this time the report has technical status which comes from DCR Wrappper,DCRs that was created by DCR Wrapper are reviewed by Data Stewards. 
DCR can be accepted or rejected,After accepting or rejecting DCR, Reltio publishes the message about this event,DCR Service consumes the message and updates DCR status,PforceRx Channel gets DCR data to prepare a response file. The response file contains the final status of DCRs processing in Reltio.Veeva DCR request file specificationThe specification is available at following location:https://COMPANY-my.sharepoint.com/:x:/r/personal/chinj2_COMPANY_com/Documents/Mig%20In-Prog/Highlander/PMO/09%20Integration/LATAM%20Reltio%20DCR/DCR_Reltio_T144_Field_Mapping_Reltio.xlsxDCR Wrapper request file specificationThe specification is available at following link:https://COMPANY.sharepoint.com/:x:/r/sites/HLDCR/Shared%20Documents/ReltioCloudMDM_LATAM_Highlander_DCR_DID_COMPANY__DEVMapping_v2.1.xlsx"
},
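The aggregation of the individual Veeva scenarios into the six generic internal types can be illustrated with a small classifier. The function and its inputs are hypothetical; only the six generic type names come from the page.

```python
# Hypothetical illustration of how the individual DCR scenarios collapse
# into the six generic types of the internal HUB model.
GENERIC_TYPES = {
    ("HCP", "insert"): "NEW_HCP_GENERIC",
    ("HCP", "update"): "UPDATE_HCP_GENERIC",
    ("HCP", "inactivate"): "DELETE_HCP_GENERIC",
    ("HCO", "insert"): "NEW_HCO_GENERIC",
    ("HCO", "update"): "UPDATE_HCO_GENERIC",
    ("HCO", "inactivate"): "DELETE_HCO_GENERIC",
}

def generic_type(entity_type, action):
    """Map an (entity type, action) pair to a generic internal DCR type."""
    return GENERIC_TYPES[(entity_type, action)]
```

For example, "Update HCP and add a private address" and "Update HCP's existing affiliation" both land in UPDATE_HCP_GENERIC.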
{
"title": "OK DCR flows (GBLUS)",
"pageID": "164469877",
"pageLink": "/pages/viewpage.action?pageId=164469877",
"content": "DescriptionThe process is responsible for creating DCRs in Reltio and starting Change Requests Workflow for singleton entities created in Reltio. During this process, the communication to IQVIA OneKey VR API is established.  SubmitVR operation is executed to create a new Validation Request. The TraceVR operation is executed to check the status of the VR in OneKey. All DCRs are saved in the dedicated collection in HUB Mongo DB, required to gather metadata and trace the changes for each DCR request. Some changes can be suggested by the DS using "Suggest" operation in Reltio and "Send to Third Party Validation" button, the process "Data Steward OK Validation Request" is processing these changes and sends them to the OneKey service. The process is divided into 4 sections:Submit Validation RequestTrace Validation RequestData Steward ResponseData Steward OK Validation RequestThe below diagram presents an overview of the entire process. Detailed descriptions are available in the separated subpages.Flow diagramModel diagramStepsSubmitVRThe process of submitting VR is triggered by the Reltio events. The process aggregates events in a time window and once the window is closed the processing is started.During SubmitVR process checks are executed, getMatches operation in Relto is invoked to verify potential matches for the singleton entities. 
Once all checks are correct new submitVR request is created in OneKey and DCR is saved in Reltio and in Mongo Cache.TraceVRThe process of tracing VR is triggered each <T> hours on Mongo DCR cache collection.For each DCR the traceVR operation is executed in OneKey to verify the current status for the specific validation request.Once the checks are correct the DCR is updated in Reltio and in Mongo Cache.Data Steward ResponseThe process is responsible for gathering changes on Change Requests objects from Reltio, the process is only accepting events without the ThirdPartyValidation flagBased on the received change invoked by the Data Steward DCR is updated in Reltio and in Mongo CacheData Steward OK Validation RequestThe process is responsible for processing changes on Change Requests objects from Reltio, the process is only accepting events with the ThirdPartyValidation flag. This event is generated after DS clicks the "Send to Third Party Validation" button in Reltio. The DS is "Suggesting" changes on the specified profile, these changes are next sent to HUB with the DCR event. The changes are not visible in Retlio, it is just a container that keeps the changes.HUB is retrieving the "Preview" state from Reltio and calculating the changes that will send to OneKey WebService using submitVR operationAfter successful submitVR response HUB is closing/rejecting the existing DCR in Reltio. The _reject operation has to be invoked on the current DCR in Reltio because the changes should no be applied to the profile. Changes are now validating in the OneKey system, and appropriate steps will be taken in the next phase (export changed data to Reltio or reject suggestion).TriggersDescribed in the separated sub-pages for each process.Dependent componentsDescribed in the separated sub-pages for each process."
},
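The page states that every DCR is kept in a dedicated Mongo collection to gather metadata and trace the request through the four subprocesses. A minimal sketch of what such a cache document might look like, with field names assumed from the statuses and values mentioned on this and the following pages:

```python
# Illustrative shape of a DCR document in the HUB Mongo cache collection.
# Field names are assumptions inferred from this page, not the real schema.
def new_dcr_record(entity_uri, vr_id, created_by="onekey-dcr-service"):
    return {
        "entityUri": entity_uri,   # Reltio entity the VR refers to
        "vrId": vr_id,             # OneKey validation request id
        "type": "OK_VR",
        "status": "NEW",           # NEW -> SENT -> ACCEPTED/REJECTED/FAILED
        "createdBy": created_by,
    }
```

TraceVR then periodically queries this collection for records in SENT status and advances them based on the OneKey response.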
{
"title": "Data Steward OK Validation Request",
"pageID": "172306908",
"pageLink": "/display/GMDM/Data+Steward+OK+Validation+Request",
"content": "DescriptionThe process the DS suggested changes based on the Change Request events received from Reltio(publishing) that are marked with the ThirdPartyValidation flag. The "suggested" changes are retrieved using the "preview" method and send to IQVIA OneKey or Veeva OpenData for validation. After successful submitVR response HUB is closing/rejecting the existing DCR in Reltio and additionally creates a new DCR object with relation to the entity in Reltio for tracking and status purposes. Because of the ONEKEY interface limitation, removal of attributes is send to IQVIA as a comment.Flow diagramStepsEvent publisher publishes full enriched events to $env-internal-[onekeyvr|thirdparty]-ds-requests-in: DCR_CHANGED("CHANGE_REQUEST_CHANGED") and DCR_CREATED("CHANGE_REQUEST_CREATED")Only events with ExternalInfo and ThirdPartyValidation flag set to true and the Change Requests status equal to AWAITING_REVIEW are accepted in this process, otherwise, the event is rejected and processing ends.HUB DCR Cache is verified if any ReltioDCR requests exist and are not in a FAILED status, then processing goes to the next step.DCR request that contains targetChangeRequest is enriched with the current Entity data using HUB CacheVeeva specific: The entity is checked, If no VOD crosswalk exists, then "golden profile" parameters should be used with below logicThe entity is checked, If active [ONEKEY|VOD] crosswalk exists the following steps are executed:The suggested state of the entity is retrieved from Reltio using the getEntityWithChangeRequests operation (parameters - entityUri and the changeRequestId from the DCR event). Current Entity and Preview Entity are compared using the following rules: (full attributes that are part of comparing process are described here)Simple attributes (like FirstName/LastName):Values are compared using the equals method. if differences are found the suggested value is taken. 
If no differences are found for mandatory, the current value is takenfor optional, the none value is taken (null)Complex attributes (like Specialties/Addresses):Whole nested attributes are matched using Reltio "uri" attributes key.If there is a new Specialty/Address, the new suggested nested attribute is takenVeeva specific: If there is a new Specialty/Addresses/Phone/Email/Medical degree*/HCP Focus area*, the new suggested nested attribute is taken. Since Veeva uses flat structure for these attributes, we need to calculate specialty attribute number (like specialty_5__v) to use when sending request. Attribute number = count of existing attributes +1.If there is no new Specialty/Address and there is a change in the existing attribute, the suggested nested change is taken. If there are multiple suggested changes, the one with the highest Rank is taken.If there are no changesfor mandatory, the current nested attribute that is connected with the ONEKEY crosswalk is taken.for optional, the none nested attribute is taken (no need to send)Contact Affiliations / OtherHCOtoHCOAffiliation:If there are no changes, return current listIf there is new Contact Affiliation with ONEKEY crosswalk, add it to current listAdditional checks:If there are changes associated with the other source (different than the [ONEKEY|VOD]), then these changes are ignored and the VR is saved in Reltio with comment listing what attributes were ignored e.g.: "Attributes: [YoB: 1956], [Email: engagetest123@test.com] ignored due to update on non-[onekey|VOD] attribute."If attribute associated with [ONEKY|VOD] source is removed, a comment specifying what should be removed on [ONEKY|VOD] side is generated and sent to [ONEKY|VOD], e.g.: "Please remove attributes: [Address: 10648 Savannah Plantation Ct, 32832, Orlando, United States]."DCRRequest object is created in Mongo for the flow state recording and generation of the new unique DCR ID for validation requests and data tracing.DCR cache attributesValues 
for IQVIAValues for OKValues for Veeva (R1)typeOK_VRPFORCERX_DCRRELTIO_SUGGESTstatusDCRRequestStatusDetails (DCRRequestStatus.NEW, currentDate)createdByonekey-dcr-serviceUser which creates DCR via Suggest button in ReltioUser which creates DCR via Suggest button in ReltiodatenowSendTo3PartyValidationtrue (flag that indicates the DCR objects created by this process)Calculated changes are mapped to the OneKey submitVR Request and it's submitted using the API REST method POST /vr/submit.Veeva specific: submitting a DCR request to Veeva requires creation of zipped CSV files with an agreed structure, placed on an S3 bucketIf the submission is successful then:DCRRequest.status is updated to SENT with [OK|VOD] request and response details DCR entity is created in Reltio and the relation between the processed entity and the DCR entityReltio source name (crosswalk.type): DCRReltio relation type: HCPtoDCR or HCOtoDCR (depending on the object type)DCR entity attributes:DCR entity attributesMapping for OneKeyMapping for VeevaDCRIDOK VR Request Id (cegedimRequestEid)ID assigned by MDM HUB EntityURIthe processed entity URIVRStatus"OPEN"VRStatusDetail"SENT"Commentsoptionally commentsSentDatecurrent timeSendTo3PartyValidationtrueOtherwise (FAILED)DCRRequest.status is updated to FAILED with OK request and exception response details DCR entity is created in Reltio and the relation between the processed entity and the DCR entityReltio source name (crosswalk.type): DCRReltio relation type: HCPtoDCR or HCOtoDCR (depending on the object type)DCR entity attributes:DCR entity attributesMappingDCRIDOK VR Request Id (cegedimRequestEid)EntityURIthe processed entity URIVRStatus"CLOSED"VRStatusDetail"FAILED"CommentsONEKEY service failed [exception details]SentDatecurrent timeSendTo3PartyValidationtrueThe current DCR object in Reltio is closed using the _reject operation - POST - /changeRequests/<id>/_rejectOtherwise, if the ONEKEY crosswalk does not exist, or the ONEKEY crosswalk is soft-deleted, or the entity 
is EndDated, the following steps are executed:A DCRRequest object is created in Mongo for flow state recording and generation of the new unique DCR ID for validation requests and data tracing.DCR cache attributesvaluestypeDCRType.OK_VRstatusDCRRequestStatusDetails (DCRRequestStatus.NEW, currentDate)created byonekey-dcr-servicedatenowSendTo3PartyValidationtrue (flag that indicates the DCR objects created by this process)DCRRequest.status is updated to FAILED and comment "No OK crosswalk available"DCR entity is created in Reltio and the relation between the processed entity and the DCR entityReltio source name (crosswalk.type): DCRReltio relation type: HCPtoDCR or HCOtoDCR (depending on the object type)DCR entity attributes:DCR entity attributesMappingDCRIDOK VR Request Id (cegedimRequestEid)EntityURIthe processed entity URIVRStatus"CLOSED"VRStatusDetail"REJECTED"CommentsNo ONEKEY crosswalk availableCreatedByMDM HUBSentDatecurrent timeSendTo3PartyValidationtrueThe current DCR object in Reltio is closed using the _reject operation - POST - /changeRequests/<id>/_rejectEND ONEKEY Comparator (suggested changes)HCPReltio AttributeONEKEY attributemandatory typeattribute typeFirstNameindividual.firstNameoptionalsimple valueLastNameindividual.lastNamemandatorysimple valueCountryisoCod2mandatorysimple valueGenderindividual.genderCodeoptionalsimple lookupPrefixindividual.prefixNameCodeoptionalsimple lookupTitleindividual.titleCodeoptionalsimple lookupMiddleNameindividual.middleNameoptionalsimple valueYoBindividual.birthYearoptionalsimple valueDobindividual.birthDayoptionalsimple valueTypeCodeindividual.typeCodeoptionalsimple lookupPreferredLanguageindividual.languageEidoptionalsimple valueWebsiteURLindividual.websiteoptionalsimple valueIdentifier value 1individual.externalId1optionalsimple valueIdentifier value 2individual.externalId2optionalsimple valueAddresses[]address.countryaddress.cityaddress.addressLine1address.addressLine2address.Zip5mandatorycomplex 
(nested)Specialities[]individual.speciality1 / 2 / 3optionalcomplex (nested)Phone[]individual.phoneoptionalcomplex (nested)Email[]individual.emailoptionalcomplex (nested)Contact Affiliations[]workplace.usualNameworkplace.officialNameworkplace.workplaceEidoptionalContact AffiliationONEKEY crosswalkindividual.individualEidmandatoryIDHCOReltio AttributeONEKEY attributemandatory typeattribute typeNameworkplace.usualNameworkplace.officialNameoptionalsimple valueCountryisoCod2mandatorysimple valueOtherNames.Nameworkplace.usualName2optionalcomplex (nested)TypeCodeworkplace.typeCodeoptionalsimple lookupWebsite.WebsiteURLworkplace.websiteoptionalcomplex (nested)Addresses[]address.countryaddress.cityaddress.addressLine1address.addressLine2address.Zip5mandatorycomplex (nested)Specialities[]workplace.speciality1 / 2 / 3optionalcomplex (nested)Phone[] (!FAX)workplace.telephoneoptionalcomplex (nested)Phone[] (FAX)workplace.faxoptionalcomplex (nested)Email[]workplace.emailoptionalcomplex (nested)ONEKEY crosswalkworkplace.workplaceEidmandatoryIDTriggersTrigger actionComponentActionDefault timeIN Events incoming mdm-onekey-dcr-service:ChangeRequestStreamprocesses the full change request events published in the stream that contain the ThirdPartyValidation flagrealtime: events stream processing Dependent componentsComponentUsageOK DCR ServiceMain component with flow implementationVeeva DCR ServiceMain component with flow implementationPublisherEvents publisher generates incoming eventsHub StoreDCR and Entities Cache "
},
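The comparison rules above for simple attributes (take the suggested value when it differs; otherwise keep the current value only for mandatory attributes and send nothing for optional ones) can be sketched as a small resolver. This is a simplified sketch of the described rule, not the actual comparator code.

```python
# Simplified sketch of the simple-attribute comparison rule described above.
# mandatory=True  -> if no difference, the current value is kept
# mandatory=False -> if no difference, nothing is sent (None)
def resolve_simple(current, suggested, mandatory):
    if suggested is not None and suggested != current:
        return suggested          # a difference was found: take the suggestion
    return current if mandatory else None
```

So a suggested LastName change wins, an unchanged mandatory LastName is re-sent as-is, and an unchanged optional Email is omitted from the request.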
{
"title": "Data Steward Response",
"pageID": "164469841",
"pageLink": "/display/GMDM/Data+Steward+Response",
"content": "DescriptionThe process updates the DCR's based on the Change Request events received from Reltio(publishing). Based on the Data Steward decision the state attribute contains relevant information to update DCR status.Flow diagramStepsEvent publisher publishes simple events to $env-internal-[onekeyvr|veeva]-change-requests-in: DCR_CHANGED("CHANGE_REQUEST_CHANGED") and DCR_REMOVED("CHANGE_REQUEST_REMOVED")Only the events without the ThirdPartyValidation flag are accepted, otherwise, the event is Rejected and the process is ended.Events are processed in the Stream and based on the targetChangeRequest.state attribute decision is madeIf the state is APPLIED or REJECTS, DCR is retrieved from the cache based on the changeRequestURIIf DCR exists in Cache The status in Reltio is updatedDCR entity attributesMappingVRStatusCLOSEDVRStatusDetailstate: APPLIED  ACCEPTEDstate: REJECTED → REJECTEDOtherwise, the events are rejected and the transaction is endedOtherwise, the events are rejected and the transition is ended.TriggersTrigger actionComponentActionDefault timeIN Events incoming mdm-onekey-dcr-service:OneKeyResponseStreammdm-veeva-dcr-service:veevaResponseStreamprocess publisher full change request events in streamrealtime: events stream processing Dependent componentsComponentUsageOK DCR ServiceMain component with flow implementationVeeva DCR ServiceMain component with flow implementationPublisherEvents publisher generates incoming eventsHub StoreDCR and Entities Cache "
},
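The status mapping applied when a Data Steward decision event arrives can be sketched directly from the table above. The function name and the None-for-rejected-event convention are assumptions; the state-to-status mapping itself is from the page.

```python
# Sketch of the status update derived from targetChangeRequest.state:
# APPLIED -> ACCEPTED, REJECTED -> REJECTED, anything else is dropped.
def dcr_update_for(state):
    mapping = {"APPLIED": "ACCEPTED", "REJECTED": "REJECTED"}
    if state not in mapping:
        return None  # event rejected, transaction ended
    return {"VRStatus": "CLOSED", "VRStatusDetail": mapping[state]}
```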
{
"title": "Submit Validation Request",
"pageID": "164469875",
"pageLink": "/display/GMDM/Submit+Validation+Request",
"content": "DescriptionThe process of submitting new validation requests to the OneKey service based on the Reltio change events aggregated in time windows. During this process, new DCRs are created in Reltio.Flow diagramStepsEvent publisher publishes simple events to $env-internal-onekeyvr-in including HCP_*, HCO_*, ENTITY_MATCHES_CHANGED Events are aggregated in a time window (recommended the window length 4 hours) and the last event is returned to the process after the window is closed.Simple events are enriched with the Entity data using HUB CacheThen, the following checks are executedcheck if at least one crosswalk create date is equal or above for a given source name and cut off date specified in configuration - section submitVR/crosswalkDecisionTablescheck if entity attribute values match specified in configurationcheck if there is no valid DCR created for the entity  check if the entity is activecheck if the OK crosswalk doesn't exist after the full entity retrieval from the HUB cachematch category is not 99GetMatches operation from Reltio returns 0 potential matchesIf any check is negative then the process is aborted.DCRRequest object is created in Mongo for the flow state recording and generation of the new unique DCR ID for validation request and data tracing.The entity is mapped to OK VR Request and it's submitted using API REST method POST /vr/submit.If the submission is successful then:DCRRequest.status is updated to SENT with OK request and response details DCR entity is created in Reltio and the relation between the processed entity and the DCR entityReltio source name (crosswalk.type): DCRReltio relation type: HCPtoDCR or HCOtoDCR (depending on the object type)DCR entity attributes:DCR entity attributesMappingDCRIDOK VR Reqeust Id (cegedimRequestEid)EntityURIthe processed entity URIVRStatus""OPEN"VRStatusDetail"SENT"CreatedByMDM HUBSentDatecurrent timeOtherwise FAILED status is recorded in DCRRequest with an OK error response.DCRRequest.status is 
updated to FAILED with OK request and exception response details DCR entity is created in Reltio and the relation between the processed entity and the DCR entityReltio source name (crosswalk.type): DCRReltio relation type: HCPtoDCR or HCOtoDCR (depending on the object type)DCR entity attributes:DCR entity attributesMappingDCRIDOK VR Reqeust Id (cegedimRequestEid)EntityURIthe processed entity URIVRStatus"CLOSED"VRStatusDetail"FAILED"CommentsONEKEY service failed [exception details]CreatedByMDM HUBSentDatecurrent timeTriggersTrigger actionComponentActionDefault timeIN Events incoming mdm-onekey-dcr-service:OneKeyStreamprocess publisher simple events in streamevents stream processing with 4h time window events aggregationOUT API requestone-key-client:OneKeyIntegrationService.submitValidationsubmit VR request to OneKeyinvokes API request for each accepted eventDependent componentsComponentUsageOK DCR ServiceMain component with flow implementationPublisherEvents publisher generates incoming eventsManagerReltio Adapter for getMatches and created operationsOneKey AdapterSubmits Validation RequestHub StoreDCR and Entities Cache MappingsReltio → OK mapping file: onkey_mappings.xlsxOK mandatory / required fields: VR - Business Fields Requirements(COMPANY).xlsxOneKey Documentation"
},
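The SubmitVR guard chain above (every check must pass or the process is aborted) can be sketched as a simple all-predicates gate. The function signature and entity fields are assumptions; the individual conditions mirror the checks listed on the page.

```python
# Illustrative guard chain for SubmitVR: all checks must pass, otherwise
# the process is aborted. Data shapes are assumptions, not the real model.
def submit_checks(entity, open_dcr_exists, match_category, potential_matches):
    checks = [
        entity.get("active") is True,          # the entity is active
        not entity.get("has_ok_crosswalk"),    # no OK crosswalk yet
        not open_dcr_exists,                   # no valid DCR already created
        match_category != 99,
        potential_matches == 0,                # Reltio getMatches found nothing
    ]
    return all(checks)
```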
{
"title": "Trace Validation Request",
"pageID": "164469983",
"pageLink": "/display/GMDM/Trace+Validation+Request",
"content": "DescriptionThe process of tracing the VR changes based on the OneKey VR changes. During this process HUB, DCR Cache is triggered every <T> hour for SENT DCR's and check VR status using OneKey web service. After verification DCR is updated in Reltio or a new Workflow is started in Reltio for the Data Steward manual validation. Flow diagramStepsEvery N  hours OK VR requests with status SENT are queried in DCRRequests store.For each open requests, its status is checked it OK using REST API method /vr/traceThe first check is the VR.rsp.status attribute, checking if the status is SUCCESSNext, if the process status (VR.rsp.results.processStatus) is REQUEST_PENDING_OKE | REQUEST_PENDING_JMS | REQUEST_PROCESSED or OK data export date (VR.rsp.results.trace6CegedimOkcExportDate) is earlier than 24 hours then the processing of the request is postponed to the next checkexportDate or processStatus are optional and can be null.The process goes to the next step only if processStatus  is  REQUEST_RESPONDED | RESPONSE_SENTThe process is blocked to next check only if  trace6CegedimOkcExportDate is not null and is earlier than 24hIf the processStatus is validated and VR.rsp.results.responseStatus is VAS_NOT_FOUND | VAS_INCOHERENT_REQUEST | VAS_DUPLICATE_PROCESS then DCR is being closed with status REJECTEDDCR entity attributesMappingVRStatus""CLOSED"VRStatusDetail"REJECTED"ReceivedDatecurrent timeCommentsOK.responseCommentBefore these 2 next steps, the current Entity status is retrieved from HUB Cache. This is required to check if the entity was merged with OK entity. 
if responseStatus is VAS_FOUND | VAS_FOUND_BUT_INVALID and OK crosswalk exists in Reltio entity which value equals to OK validated id (individualEidValidated or workplaceEidValidated) then DCR is closed with status ACCEPTED.DCR entity attributesMappingVRStatus""CLOSED"VRStatusDetail"ACCEPTED"ReceivedDatecurrent timeCommentsOK.responseComment if responseStatus is VAS_FOUND | VAS_FOUND_BUT_INVALID but OK crosswalk doesn't exist in Reltio then Relio DCR Request is created and workflow task is triggered for Data Steward review. DCR status entity is updated with DS_ACTION_REQUIRED status. DCR entity attributesMappingVRStatus""OPEN"VRStatusDetail"DS_ACTION_REQUIRED "ReceivedDatecurrent timeCommentsOK.responseCommentGET /changeRequests operation is invoked to get a new change request ID and start a new workflowPOST /workflow/_initiate operation is invoked to init new Workflow in ReltioWorkflow attributesMappingchangeRequest.uriChangeRequest Reltio URIchangeRequest.changesEntity URIcommentindividualEidValidated or workplaceEidValidatedPOST /entities?changeRequestId=<id> - operation is invoked to update change request Entity container with DCR Status to Closed, this change is only visible in Reltio once DS accepts the DCR. 
Body attributesMappingattributes"DCRRequests": [ { "value": { "VRStatus": [ { "value": "CLOSED" } ] }, "refEntity": { "crosswalks": [ { "type": "configuration/sources/DCR", "value": "$requestId", "dataProvider": false, "contributorProvider": true }, { "type": "configuration/sources/DCR", "value": "$requestId_REF", "dataProvider": true, "contributorProvider": false } ] }, "refRelation": { "crosswalks": [ { "type": "configuration/sources/DCR", "value": "$requestId_REF" } ] } }]crosswalks"crosswalks": [ { "type": "configuration/sources/<source crosswalk>", "value": "<source value>", "dataProvider": false, "contributorProvider": true, "deleteDate": "" }, { "type": "configuration/sources/DCR", "value": "$requestId_CR", "dataProvider": true, "contributorProvider": false, "deleteDate": "" }]TriggersTrigger actionComponentActionDefault timeIN Timer (cron)mdm-onekey-dcr-service:TraceVRServicequery mongo to get all SENT DCR's related to OK_VR processevery <T> hourOUT API requestone-key-client:OneKeyIntegrationService.traceValidationtrace VR request to OneKeyinvokes API request for each DCRDependent componentsComponentUsageOK DCR ServiceMain component with flow implementationManagerReltio Adapter for GET /changeRequests and POST /workflow/_initiate operations OneKey AdapterTraceValidation RequestHub StoreDCR and Entities Cache "
},
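The TraceVR branching above can be sketched as a single decision function. The status constants come from the page; the function itself, its signature, and the reading of "within the last 24 hours" for the export date are illustrative assumptions.

```python
from datetime import datetime, timedelta

# Sketch of the TraceVR decision logic described above; an assumption-laden
# illustration, not the actual service code.
PENDING = {"REQUEST_PENDING_OKE", "REQUEST_PENDING_JMS", "REQUEST_PROCESSED"}
REJECTING = {"VAS_NOT_FOUND", "VAS_INCOHERENT_REQUEST", "VAS_DUPLICATE_PROCESS"}
FOUND = {"VAS_FOUND", "VAS_FOUND_BUT_INVALID"}

def trace_decision(process_status, export_date, response_status, now):
    if process_status in PENDING:
        return "POSTPONE"
    if export_date is not None and now - export_date < timedelta(hours=24):
        return "POSTPONE"  # OK data export happened less than 24h ago
    if response_status in REJECTING:
        return "REJECTED"
    if response_status in FOUND:
        # ACCEPTED if the OK crosswalk already exists on the entity,
        # otherwise a Reltio DCR + workflow for DS review is created
        return "ACCEPTED_OR_DS_REVIEW"
    return "POSTPONE"
```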
{
"title": "PforceRx DCR flows",
"pageID": "209949183",
"pageLink": "/display/GMDM/PforceRx+DCR+flows",
"content": "DescriptionMDM HUB exposes Rest API to create and check the status of DCR. The process is responsible for creating DCRs in Reltio and starting Change Requests Workflow DCRs created in Reltio or creating the DCRs (submitVR operation) in ONEKEY. DCR requests can be routed to an external MDM HUB instance handling the requested country. The action is transparent to the caller. During this process, the communication to IQVIA OneKey VR API / Reltio API is established. The routing decision depends on the market, operation type, or changed profile attributes.Reltio API:  createEntity (with ChangeReqest) operation is executed to create a completely new entity in the new Change Request in Reltio. attributesUpdate (with ChageRequest) operation is executed after calculation of the specific changes on complex or simple attributes on existing entity - this also creates a new Change Request.  Start Workflow operation is requested at the end, this starts the Wrofklow for the DCR in Reltio so the change requests are started in the Reltio Inbox for Data Steward review.IQVIA API: SubmitVR operation is executed to create a new Validation Request. The TraceVR operation is executed to check the status of the VR in OneKey.All DCRs are saved in the dedicated collection in HUB Mongo DB, required to gather metadata and trace the changes for each DCR request. The DCR statuses are updated by consuming events generated by Reltio or periodic query action of open DCRs in OneKeyThe Data Steward can decide to route a DCR to IQVIA as well - some changes can be suggested by the DS using the "Suggest" operation in Reltio and "Send to Third Party Validation" button, the process "Data Steward OK Validation Request" is processing these changes and sends them to the OneKey service. The below diagram presents an overview of the entire process. 
Detailed descriptions are available in the separated subpages.API doc URL: https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-dcr-spec-emea-dev/swagger-ui/index.htmlFlow diagramDCR Service High-Level ArchitectureDCR HUB Logical ArchitectureModel diagramFlows:Create DCRThe client call API Post /dcr method and pass the request in JSON format to MDM HUB DCR serviceThe request is validated against the following rules:mandatory fields are setreference object HCP,HCO are available in Reltioreferenced attributes like specialties, addresses are in the changed objectThe service evaluates the target system based on country, operation type (create, update), changed attributes. The process is controlled by the decision table stored in the config.The DCR is created in the target system through the APIThe result is stored in the registry. DCR information entity is created in Reltio for tracking.The status with created DCR object ids are returned in response to the ClientGet DCR statusThe client calls GET /dcr/_status methodThe DCR service queries DCR registry in Mongo and returns the status to the Client.There are processes updating dcr status in the registry:DCR change events are generated by Reltio when DCR is accepted or rejected by DS. Events are processed by the service.Reltio: process DCR Change EventsDCR change events are generated by Reltio when DCR is accepted or rejected by DS. Events are processed by the service.OneKey: process DCR Change EventsDCR change events are generated by the OneKey service when DCR is accepted or rejected by DS. 
Events are processed by the service.OneKey: generate DCR Change Events (traceVR)Every x configured hours the OneKey status method is queried to get status for open validation requests.Reltio: create DCR method - directdirect API method that creates DCR in Reltio (contains mapping and logic description)OneKey: create DCR method (submitVR) - directdirect API method that creates DCR in OneKey - executes the submitVR operation (contains mapping and logic description)TriggersDescribed in the separated sub-pages for each process.Dependent componentsDescribed in the separated sub-pages for each process."
},
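The routing decision table described above (every attribute in a rule is optional; the matching rule yields a TargetType of Reltio, OneKey or Veeva) can be sketched as a first-match evaluator. The rule contents below are made-up examples; only the attribute names and the first-match-on-defined-conditions behavior are taken from the page, and the latter is itself an assumption about evaluation order.

```python
# Illustrative first-match evaluation of the DCR routing decision table.
# Every key except "target" is an optional condition; a rule matches when
# all of its defined conditions equal the corresponding request fields.
def evaluate_target(rules, request):
    for rule in rules:
        conditions = {k: v for k, v in rule.items() if k != "target"}
        if all(request.get(k) == v for k, v in conditions.items()):
            return rule["target"]
    return None  # no rule matched

# Hypothetical rules - not the real configuration.
RULES = [
    {"country": "US", "operationType": "insert", "target": "OneKey"},
    {"country": "US", "target": "Reltio"},
]
```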
{
"title": "Create DCR",
"pageID": "209949185",
"pageLink": "/display/GMDM/Create+DCR",
"content": "DescriptionThe process creates change requests received from PforceRx Client and sends the DCR to the specified target service - Reltio, OneKey or Veeva OpenData (VOD). DCR is created in the system and then processed by the data stewards. The status is asynchronously updated by the HUB processes, Client represents the DCR using a unique extDCRRequestId value. Using this value Client can check the status of the DCR (Get DCR status). Flow diagramSource: LucidSource: LucidDCR Service component perspective StepsClients execute the API POST /dcr requestKong receives requests and handles authenticationIf the authentication succeeds the request is forwarded to the dcr-service-2 component,DCR Service checks permissions to call this operation and the correctness of the request, then the flow is started and the following steps are executed:Parse and validate the dcr request. The validation logic checks the following: Check if the list of DCRRequests contains unique extDCRRequestId.Requests that are duplicate will be rejected with the error message - "Found duplicated request(s)"For each DCRRequest in the input list execute the following checks:Users can define the following number of entities in the Request:at least one entity has to be defined, otherwise, the request will be rejected with an error message - "No entities found in the request"single HCPsinge HCOsinge HCP with single HCOtwo HCOsCheck if the main reference objects exist in Reltio for update and delete actionHCP.refId or HCO.refId, user have to specify one of:CrosswalkTargetObjectId - then the entity is retrieved from Reltio using get entity by crosswalk operationEntityURITargetObjectId - then the entity is retrieved from Reltio using get entity by uri operationCOMPANYCustomerIdTargetObjectId - then the entity is retrieved from Reltio using search operation by the COMPANYGlobalCustomerIDAttributes validation:Simple attributes - like firstName/lastName e.t.cfor update action on the main object:if the 
input parameter is defined with an empty value - "" - this will result in the removal of the target attributeif the input parameter is defined with a non-empty value - this will result in the update of the target attributeNested attributes - like Specialties/Addresses etc.for each attribute, the user has to define the refId to uniquely identify the attributeFor action "update" - if the refId is not found in the target object, the request will be rejected with a detailed error message For action "insert" - the refId is not required - a new reference attribute will be added to the target objectChanges validation:If the validation detected 0 changes (when comparing the applied changes with the target entity) - the request is rejected with an error message - "No changes detected"Evaluate dcr service (based on the decision table config)The following decision table is defined to choose the target serviceLIST OF the following combination of attributes:attributedescriptionuserName the user name that executes the requestsourceNamethe source name of the Main objectcountrythe country defined in the requestoperationTypethe operation type for the Main object{ insert, update, delete }affectedAttributesthe list of attributes that the user is changingaffectedObjects{ HCP, HCO, HCP_HCO }RESULT →  TargetType {Reltio, OneKey, Veeva}Each attribute in the configuration is optional. The decision table is evaluated based on the input request and the main object - the main object is the HCP; if the HCP is empty then the decision table checks the HCO. The result of the decision table is the TargetType, the routing to the Reltio MDM system, OneKey or Veeva service. 
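The decision-table evaluation described above can be sketched as follows. This is a minimal illustration only; the Rule and TargetType names are assumptions for this example, not the actual HUB classes. Each null attribute in a rule acts as a wildcard for that optional attribute, and the first matching row wins:

```java
import java.util.List;
import java.util.Optional;

public class DcrRouting {
    public enum TargetType { RELTIO, ONEKEY, VEEVA }

    // One row of the decision table; a null attribute matches any value (optional attribute).
    public record Rule(String userName, String sourceName, String country,
                       String operationType, String affectedObjects, TargetType result) {
        boolean matches(String user, String source, String ctry, String op, String objects) {
            return (userName == null || userName.equals(user))
                && (sourceName == null || sourceName.equals(source))
                && (country == null || country.equals(ctry))
                && (operationType == null || operationType.equals(op))
                && (affectedObjects == null || affectedObjects.equals(objects));
        }
    }

    // Evaluate against the main object (HCP if present, otherwise HCO); first match wins.
    public static Optional<TargetType> evaluate(List<Rule> table, String user, String source,
                                                String ctry, String op, String objects) {
        return table.stream()
                .filter(r -> r.matches(user, source, ctry, op, objects))
                .map(Rule::result)
                .findFirst();
    }
}
```

A catch-all rule with every attribute left null can serve as the default route (for example, to Reltio).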
Execute target service (reltio/onekey/veeva)Reltio: create DCR method - directOneKey: create DCR method (submitVR) - directVeeva: create DCR method (storeVR)Create DCR in Reltio and save DCR in DCR Registry If the submission is successful then: DCR entity is created in Reltio and the relation between the processed entity and the DCR entityReltio source name (crosswalk.type): DCRReltio relation type: HCPtoDCR or HCOtoDCR (depending on the object type)for the "create" and "delete" operations the Relation has to be created between objectsif this is just the "insert" operation the Relation will be created after the acceptance of the Change Request in Reltio - Reltio: process DCR Change EventsDCR entity attributes once sent to OneKeyDCR entity attributesMappingDCRIDextDCRRequestIdEntityURIthe processed entity URIVRStatus"OPEN"VRStatusDetail"SENT_TO_OK"CreatedByMDM HUBSentDatecurrent timeCreateDatecurrent timeCloseDateif REJECTED | ACCEPTED -> current timedcrTypeevaluate based on config:dcrTypeRules: - type: CR0 size: 1 action: insert entity: com.COMPANY.mdm.api.dcr2.HCPDCR entity attributes once sent to VeevaDCR entity attributesMappingDCRIDextDCRRequestIdEntityURIthe processed entity URIVRStatus"OPEN"VRStatusDetail"SENT_TO_VEEVA"CreatedByMDM HUBSentDatecurrent timeCreateDatecurrent timeCloseDateif REJECTED | ACCEPTED -> current timedcrTypeevaluate based on config:dcrTypeRules: - type: CR0 size: 1 action: insert entity: com.COMPANY.mdm.api.dcr2.HCPDCR entity attributes once sent to Reltio → action is passed to DS and workflow is started. 
DCR entity attributesMappingDCRIDextDCRRequestIdEntityURIthe processed entity URIVRStatus"OPEN"VRStatusDetail"DS_ACTION_REQUIRED "CreatedByMDM HUBSentDatecurrent timeCreateDatecurrent timeCloseDateif REJECTED | ACCEPTED -> current timedcrTypeevaluate based on config:dcrTypeRules: - type: CR0 size: 1 action: insert entity: com.COMPANY.mdm.api.dcr2.HCPMongo Update: DCRRequest.status is updated to SENT with OneKey or Veeva request and response details or DS_ACTION_REQUIRED with all Reltio detailsOtherwise FAILED status is recorded in DCRRequest with a detailed error message.Mongo Update:  DCRRequest.status is updated to FAILED with all required attributes, request, and exception response details Initialize Workflow in Reltio (only for requests whose TargetType is Reltio)POST /workflow/_initiate operation is invoked to initialize a new Workflow in ReltioWorkflow attributesMappingchangeRequest.uriChangeRequest Reltio URIchangeRequest.changesEntity URIThen Auto close logic is invoked to evaluate whether the DCR request meets conditions to be auto accepted or auto rejected. Logic is based on the decision table PreCloseConfig. If DCRRequest.country is contained in PreCloseConfig.acceptCountries or PreCloseConfig.rejectCountries then the DCR is accepted or rejected respectively. return DCRResponse to Client - During the flow, DCRResponse may be returned to Client with the specific errorCode or requestStatus. The description for all response codes is presented on this page: Get DCR statusTriggersTrigger actionComponentActionDefault timeREST callDCR Service: POST /dcrcreate DCRs in the Reltio, OneKey or Veeva systemAPI synchronous requests - realtimeDependent componentsComponentUsageDCR ServiceMain component with flow implementationOK DCR ServiceOneKey Adapter - API operationsVeeva DCR ServiceVeeva Adapter - API operations and S3/SFTP communication ManagerReltio Adapter - API operationsHub StoreDCR and Entities Cache "
},
{
"title": "DCR state change",
"pageID": "218438617",
"pageLink": "/display/GMDM/DCR+state+change",
"content": "DescriptionThe following diagram represents the DCR state changes. The DCR object state is saved in HUB and in the Reltio DCR entity object. The state of the DCR is changed based on the Reltio/IQVIA/Veeva Data Steward action.Flow diagramStepsDCR is created (OPEN)  - Create DCRDCR is sent to Reltio, OneKey or VeevaWhen sent to ReltioPre Close logic is invoked to auto accept (PRE_ACCEPT) or auto reject (PRE_REJECT) DCRReltio Data Steward processes the DCR - Reltio: process DCR Change EventsOneKey Data Steward processes the DCR - OneKey: process DCR Change EventsVeeva Data Steward processes the DCR - Veeva: process DCR Change EventsData Steward DCR status change perspectiveTransaction LogThere are the following main assumptions regarding the transaction log in DCR service: Main transaction The user sends a list of DCR Requests to the DCR service and receives the list of DCR ResponsesTransaction service generates the transaction ID for the input request - this is used as the correlation ID for each separate DCR Request in the listTransaction service saves:METADATAmain transaction IDuserNameextDCRRequestIds (list of all) BODYthe DCR Requests list and the DCR Response ListState change transactionDCR object state may change depending on the DS decision, for each state change (represented as a green box in the above diagram) the transaction is saved with the following attributes:Transaction METADATAmain transaction IDextDCRRequestIddcrRequestIdReltio:VRStatusVRStatusDetailHUB:DCRRequestStatusDetailsoptionally:errorMessageerrorCodeTransaction BODY:Input EventLog appenders:Kafka Transaction appender - saves whole events (metadata+body) to Kafka - data presented in the Kibana Dashboard <link TODO>Simple Transaction logger - saves the transaction details to a file in the following format:{ID}    {extDCRRequestId}   {dcrRequestId}   {VRStatus}   {VRStatusDetail}   {DCRRequestStatusDetails}   {errorCode}   {errorMessage}TriggersTrigger actionComponentActionDefault timeREST 
callDCR Service: POST /dcrcreate DCRs in the Reltio system or in OneKeyAPI synchronous requests - realtimeIN Events incoming dcr-service-2:DCRReltioResponseStreamprocess publisher full change request events in the streamrealtime: events stream processing IN Events incoming dcr-service-2:DCROneKeyResponseStreamprocess publisher full change request events in the streamrealtime: events stream processing IN Events incoming dcr-service-2:DCRVeevaResponseStreamprocess publisher full change request events in the streamrealtime: events stream processing Dependent componentsComponentUsageDCR ServiceMain component with flow implementationOK DCR ServiceOneKey Adapter  - API operationsVeeva DCR ServiceVeeva Adapter  - API operationsManagerReltio Adapter  - API operationsHub StoreDCR and Entities Cache "
},
{
"title": "Get DCR status",
"pageID": "209949187",
"pageLink": "/display/GMDM/Get+DCR+status",
"content": "DescriptionThe client creates DCRs in Reltio, OneKey or Veeva OpenData using the Create DCR operation. The status is then asynchronously updated in the DCR Registry. The operation retrieves the current status of the DCRs whose update date is between the 'updateFrom' and 'updateTo' input parameters. PforceRx first asks which DCRs have been changed since the last time it checked (usually 24h) and then iterates over each DCR to get detailed info.Flow diagramSource: LucidDependent flows:The DCRRegistry is enriched by the DCR events that are generated by Reltio - the flow description is here - Reltio: process DCR Change EventsThe DCRRegistry is enriched by the DCR events generated in the OneKey DCR service component - after the submitVR operation is invoked to ONEKEY, each DCR is traced asynchronously in this process - OneKey: process DCR Change EventsThe DCRRegistry is enriched by the DCR events generated in the Veeva OpenData DCR service component - after the submitVR operation is invoked to VEEVA, each DCR is traced asynchronously in this process - Veeva: process DCR Change EventsStepsStatusThere are the following request statuses that users may receive during the Create DCR operation or while checking the updated status using the GET /dcr/_status operation described below:RequestStatusDCRStatus Internal Cache statusDescriptionREQUEST_ACCEPTEDCREATEDSENT_TO_OKDCR was sent to the ONEKEY system for validation and is pending processing by a Data Steward in the systemREQUEST_ACCEPTEDCREATEDSENT_TO_VEEVADCR was sent to the VEEVA system for validation and is pending processing by a Data Steward in the systemREQUEST_ACCEPTEDCREATEDDS_ACTION_REQUIREDDCR is pending Data Steward validation in Reltio, waiting for approval or rejectionREQUEST_ACCEPTEDCREATEDOK_NOT_FOUNDUsed when the ONEKEY profile was not found after X retriesREQUEST_ACCEPTEDCREATEDVEEVA_NOT_FOUNDUsed when the VEEVA profile was not found after X retriesREQUEST_ACCEPTEDCREATEDWAITING_FOR_ETL_DATA_LOADUsed when waiting for actual data 
profile load from 3rd Party to appear in ReltioREQUEST_ACCEPTEDACCEPTEDACCEPTEDData Steward accepted the DCR, changes were appliedREQUEST_ACCEPTEDACCEPTEDPRE_ACCEPTEDPreClose logic was invoked and automatically accepted the DCR according to the decision table in PreCloseConfigREQUEST_REJECTEDREJECTED REJECTEDData Steward rejected the changes presented in the Change RequestREQUEST_REJECTEDREJECTED PRE_REJECTEDPreClose logic was invoked and automatically rejected the DCR according to the decision table in PreCloseConfigREQUEST_FAILED-FAILEDDCR requests failed due to: validation error/ unexpected error etc. - details in the errorCode and errorMessageError codes:There are the following classes of exception that users may receive during the Create DCR operation:ClasserrorCodeDescriptionHTTP code1DUPLICATE_REQUESTrequest rejected - extDCRRequestId  is registered - this is a duplicate request4032NO_CHANGES_DETECTEDentities are the same (request is the same) - no changes4003VALIDATION_ERRORref object does not exist (not able to find the HCP/HCO target object)4043VALIDATION_ERRORref attribute does not exist - not able to find the nested attribute in the target object4003VALIDATION_ERRORwrong number of HCP/HCO entities in the input request400Clients execute the API GET/dcr/_status requestKong receives requests and handles authenticationIf the authentication succeeds the request is forwarded to the dcr-service-2 component,DCR Service checks permissions to call this operation and the correctness of the request, then the flow is started and the following steps are executed:A query on Mongo is executed to get all DCRs matching the input parameters:updateFrom (date-time) - DCR last update from - DCRRequestDetails.status.changeDateupdateTo (date-time) - DCR last update to - DCRRequestDetails.status.changeDatelimit (int) the maximum number of results returned through API - the recommended value is 25. 
The max value for a single request is 50.offset(int) - result offset - the parameter used to page through results that exceed the limit. Resulting values are aggregated and returned to the Client.The client receives the List<DCRResponse> body.TriggersTrigger actionComponentActionDefault timeREST callDCR Service: GET/dcr/_statusget status of created DCRs. Limit the results using query parameters like dates and offsetAPI synchronous requests - realtimeDependent componentsComponentUsageDCR ServiceMain component with flow implementationHub StoreDCR and Entities Cache "
},
{
"title": "OneKey: create DCR method (submitVR) - direct",
"pageID": "209949294",
"pageLink": "/display/GMDM/OneKey%3A+create+DCR+method+%28submitVR%29+-+direct",
"content": "DescriptionREST API method exposed in the OK DCR Service component responsible for submitting the VR to OneKeyFlow diagramStepsReceive the API requestValidate - when there is an update on the profile, check that the ONEKEY crosswalk exists, otherwise reject the requestThe DCR is mapped to OK VR Request and it's submitted using API REST method POST /vr/submit. (mapping described below)If the submission is successful then:DCRRequest is updated to SENT_TO_OK with OK request and response details. DCRRegistryONEKEY collection is saved for tracing purposes. The process that reads and checks ONEKEY VRs is described here: OneKey: generate DCR Change Events (traceVR)Otherwise FAILED status is recorded and the response is returned with an OK error responseMappingVR - Business Fields Requirements_UK.xlsx - file that contains VR UK requirements and mapping to IQVIA modelHUBONEKEYattributesattributescodesmandatoryattributesvaluesHCOYentityTypeWORKPLACEYvalidation.clientRequestIdHUB_GENERATED_IDYvalidation.processQYvalidation.requestDate1970-01-01T00:00ZYvalidation.callDate1970-01-01T00:00ZattributesYvalidation.requestProcessIextDCRCommentvalidation.requestCommentcountryYisoCod2reference EntitycrosswalkONEKEYworkplace.workplaceEidnameworkplace.usualNameworkplace.officialNameotherHCOAffiliationsparentUsualNameworkplace.parentUsualNamesubTypeCodeCOTFacilityType(TET.W.*)workplace.typeCodetypeCodeno value in PFORCERXHCOSubType(LEX.W.*)workplace.activityLocationCodeaddressessourceAddressIdN/AaddressTypeN/AaddressLine1address.longLabeladdressLine2address.longLabel2addressLine3N/AstateProvinceAddressState(DPT.W.*)address.countyCodecityYaddress.cityzipaddress.longPostalCodecountryYaddress.countryrankget address with rank=1 emailstypeN/Aemailworkplace.emailrankget email with rank=1 otherHCOAffiliationstypeN/Arankget affiliation with rank=1 reference EntityotherHCOAffiliations reference entity onekeyID ONEKEYworkplace.parentWorkplaceEidphonestypecontains 
FAXnumberworkplace.telephonerankget phone with rank=1 typenot contains FAXnumberworkplace.faxrankget phone with rank=1 HCPYentityTypeACTIVITYYvalidation.clientRequestIdHUB_GENERATED_IDYvalidation.processQYvalidation.requestDate1970-01-01T00:00ZYvalidation.callDate1970-01-01T00:00ZattributesYvalidation.requestProcessIextDCRCommentvalidation.requestCommentcountryYisoCod2reference EntitycrosswalkONEKEYindividual.individualEidfirstNameindividual.firstNamelastNameYindividual.lastNamemiddleNameindividual.middleNametypeCodeN/AsubTypeCodeHCPSubTypeCode(TYP..*)individual.typeCodetitleHCPTitle(TIT.*)individual.titleCodeprefixHCPPrefix(APP.*)individual.prefixNameCodesuffixN/AgenderGender(.*)individual.genderCodespecialtiestypeCodeHCPSpecialty(SP.W.*)individual.speciality1typeN/Arankget speciality with rank=1 typeCodeHCPSpecialty(SP.W.*)individual.speciality2typeN/Arankget speciality with rank=2 typeCodeHCPSpecialty(SP.W.*)individual.speciality3typeN/Arankget speciality with rank=3 addressessourceAddressIdN/AaddressTypeN/AaddressLine1address.longLabeladdressLine2address.longLabel2addressLine3N/AstateProvinceAddressState(DPT.W.*)address.countyCodecityYaddress.cityzipaddress.longPostalCodecountryYaddress.countryrankget address with rank=1 identifierstypeN/AidN/AphonestypeN/Anumberindividual.mobilePhonerankget phone with rank=1 emailstypeN/Aemailindividual.emailrankget phone with rank=1 contactAffiliationsno value in PFORCERXtypeRoleType(TIH.W.*)activity.roleprimaryN/Arankget affiliation with rank=1 contactAffiliations reference EntitycrosswalksONEKEYworkplace.workplaceEidHCP & HCOYentityTypeACTIVITYFor HCP full mapping check the HCP section aboveYvalidation.clientRequestIdHUB_GENERATED_IDFor HCO full mapping check the HCO section aboveYvalidation.processQYvalidation.requestDate1970-01-01T00:00ZYvalidation.callDate1970-01-01T00:00ZattributesYvalidation.requestProcessIextDCRCommentvalidation.requestCommentcountryYisoCod2addressesIf the HCO address exists map to ONEKEY 
addressaddress (mapping HCO)elseIf the HCP address exists map to ONEKEY addressaddress (mapping HCP)contactAffiliationsno value in PFORCERXtypeRoleType(TIH.W.*)activity.roleprimaryN/Arankget affiliation with rank=1 TriggersTrigger actionComponentActionDefault timeREST callDCR Service: POST /dcrcreate DCRs in the ONEKEYAPI synchronous requests - realtimeDependent componentsComponentUsageDCR Service 2Main component with flow implementationHub StoreDCR and Entities Cache "
},
{
"title": "OneKey: generate DCR Change Events (traceVR)",
"pageID": "209950500",
"pageLink": "/pages/viewpage.action?pageId=209950500",
"content": "DescriptionThis process is triggered after the DCR was routed to Onekey based on the decision table configuration. The process of tracing the VR changes is based on the OneKey VR changes. During this process the HUB DCR Cache is queried every <T> hours for SENT DCRs and the VR status is checked using the OneKey web service. After verification, the DCR Change event is generated. The DCR event is processed in the OneKey: process DCR Change Events and the DCR is updated in Reltio with Accepted or Rejected status.Flow diagramStepsEvery <T> hours OK VR requests with status SENT are queried in the DCRRegistryONEKEY store.For each open request, its status is checked in OK using the REST API method /vr/traceThe first check is the VR.rsp.status attribute, checking if the status is SUCCESSNext, if the process status (VR.rsp.results.processStatus) is REQUEST_PENDING_OKE | REQUEST_PENDING_JMS | REQUEST_PROCESSED or the OK data export date (VR.rsp.results.trace6CegedimOkcExportDate) is earlier than 24 hours then the processing of the request is postponed to the next checkexportDate or processStatus are optional and can be null.The process goes to the next step only if processStatus  is  REQUEST_RESPONDED | RESPONSE_SENTThe processing is held until the next check only if  trace6CegedimOkcExportDate is not null and is earlier than 24hIf the processStatus is validated and VR.rsp.results.responseStatus is VAS_NOT_FOUND | VAS_INCOHERENT_REQUEST | VAS_DUPLICATE_PROCESS then OneKeyDCREvent is generated with status REJECTEDOneKeyChangeRequest attributesMappingvrStatus"CLOSED"vrStatusDetail"REJECTED"traceResponseReceivedDatecurrent timeoneKeyCommentOK.responseCommentNext, if responseStatus is VAS_FOUND | VAS_FOUND_BUT_INVALID then OneKeyDCREvent is generated with status ACCEPTED. (now the new ONEKEY profile will be loaded to Reltio using ETL data load. 
The OneKey: process DCR Change Events flow processes these events and checks in Reltio whether the ONEKEY profile is created and a COMPANYCustomerGlobalId is assigned; this process will wait until the ONEKEY profile is in Reltio so the client receives the ACCEPTED DCR only after this condition is met) DCR entity attributesMappingvrStatus"CLOSED"vrStatusDetail"ACCEPTED"traceResponseReceivedDatecurrent timeoneKeyCommentOK.responseComment \\nONEKEY ID = individualEidValidated or workplaceEidValidatedevents are published to the $env-internal-onekey-dcr-change-events-in topicEvent Modeldata class OneKeyDCREvent(val eventType: String? = null, val eventTime: Long? = null, val eventPublishingTime: Long? = null, val countryCode: String? = null, val dcrId: String? = null, val targetChangeRequest: OneKeyChangeRequest,)data class OneKeyChangeRequest( val vrStatus : String? = null, val vrStatusDetail : String? = null, val oneKeyComment : String? = null, val individualEidValidated : String? = null, val workplaceEidValidated : String? = null, val vrTraceRequest : String? = null, val vrTraceResponse : String? = null,)TriggersTrigger actionComponentActionDefault timeIN Timer (cron)dcr-service:TraceVRServicequery mongo to get all SENT DCRs related to the PFORCERX processevery <T> hourOUT Eventsdcr-service:TraceVRServicegenerate the OneKeyDCREventevery <T> hourDependent componentsComponentUsageDCR ServiceMain component with flow implementationHub StoreDCR and Entities Cache "
},
{
"title": "OneKey: process DCR Change Events",
"pageID": "209949303",
"pageLink": "/display/GMDM/OneKey%3A+process+DCR+Change+Events",
"content": "\n\n\n\nDescriptionThe process updates the DCRs based on the Change Request events received from [ONEKEY|VOD] (after the trace VR method result). Based on the [IQVIA|VEEVA] Data Steward decision the state attribute contains relevant information to update the DCR status. During this process the comments created by the IQVIA DS are also retrieved and the relationship (optional step) between the DCR object and the newly created entity is created. The DCR status is accepted only after the [ONEKEY|VOD] profile is created in Reltio; only then will the Client receive the ACCEPTED status. The process checks Reltio with a <T> delay and retries if the ETL load is still in progress, waiting for the [ONEKEY|VOD] profile. Flow diagram\n\n\n\n\n\nOneKey variant\n\n\n\nVeeva variant: \n\n\n\n\n\nStepsOneKey: generate DCR Change Events (traceVR) publishes simple events to $env-internal-onekey-dcr-change-events-in: DCR_CHANGEDVeeva specific: Veeva: generate DCR Change Events (traceVR) publishes simple events to $env-internal-veeva-dcr-change-events-in: DCR_CHANGEDEvents are aggregated in a time window (recommended window length: 24 hours) and the last event is returned to the process after the window is closed.Events are processed in the Stream and based on the OneKeyDCREvent.OneKeyChangeRequest.vrStatus | VeevaDCREvent.VeevaChangeRequestDetails.vrStatus attribute a decision is madeDCR is retrieved from the cache based on the _id of the DCRIf the event state is ACCEPTEDGet the Reltio entity COMPANYCustomerID by the [ONEKEY|VOD] crosswalkIf such a crosswalk entity exists in Reltio:COMPANYGlobalCustomerId is saved in the Registry and will be returned to the Client During the process, the optional check is triggered - create the relation between the DCR object and newly created entitiesif DCRRegistry contains an empty list of entityUris, or some of the newly created entities are not present in the list, the Relation between this object and the DCR has to be createdDCR entity is updated in Reltio and the 
relation between the processed entity and the DCR entityReltio source name (crosswalk. type): DCRReltio relation type: HCPtoDCR or HCOtoDCR (depending on the object type)Newly created entity uris should be retrieved by the individualEidValidated or workplaceEidValidated (it may be both) attributes from the events that represent the HCP or HCO crosswalks.The status in Reltio and in Mongo is updatedDCR entity attributesMapping for OneKeyMapping for VeevaVRStatusCLOSEDVRStatusDetailstate: ACCEPTEDCommentsONEKEY comments ({VR.rsp.responseComments})ONEKEY ID = individualEidValidated or workplaceEidValidatedVEEVA comments = VR.rsp.responseCommentsVEEVA ID = entityUrisCOMPANYGlobalCustomerIdThis is required in ACCEPTED status If the [ONEKEY|VOD] profile does not exist in ReltioRegenerate the Event with a new timestamp to the input topic so it will be processed in the next <T> hoursUpdate the Reltio DCR statusDCR entity attributesMappingVRStatusOPENVRStatusDetailACCEPTEDupdate the Mongo status to OK_NOT_FOUND | VEEVA_NOT_FOUND and increase the "retryCounter" attributeIf the event state is REJECTEDIf a Reltio DS has already seen this request, REJECT the DCR and end the flow (if the initial target type is Reltio)The status in Reltio and in Mongo is updatedDCR entity attributesMappingVRStatusCLOSEDVRStatusDetailstate: REJECTEDComments[ONEKEY|VOD] comments ({VR.rsp.responseComments})If this is based on the routing table and it was never sent to the Reltio DS, then create the DCR workflow and send this to the Reltio DS. Add the information comment that this was rejected by OneKey, so now the Reltio DS has to decide whether this should be REJECTED or APPLIED in Reltio. Add the comment that it is not possible to execute the sendTo3PartyValidation button in this case. 
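The REJECTED branch described above can be sketched roughly as follows. This is an illustrative sketch only; class and method names such as DcrRecord and submitToReltio are assumptions for this example, not the actual dcr-service-2 code:

```java
public class RejectionHandler {
    public record DcrRecord(String initialTargetType, String status, String comment) {}

    // Hypothetical client for re-submitting the DCR to Reltio as a new workflow.
    public interface ReltioDcrClient {
        boolean submitToReltio(DcrRecord dcr); // true when Reltio accepted the change request
    }

    public static DcrRecord onThirdPartyReject(DcrRecord dcr, String rejectComment,
                                               ReltioDcrClient reltio) {
        if ("RELTIO".equals(dcr.initialTargetType())) {
            // A Reltio DS has already seen this request: close it as rejected.
            return new DcrRecord(dcr.initialTargetType(), "REJECTED", rejectComment);
        }
        // Routed straight to OneKey/Veeva before: hand the request to the Reltio DS now,
        // with a comment explaining the third-party rejection.
        String comment = "This DCR was REJECTED by the third-party Data Steward: "
                + rejectComment + ". Please review this DCR in Reltio and APPLY or REJECT.";
        if (reltio.submitToReltio(dcr)) {
            return new DcrRecord(dcr.initialTargetType(), "DS_ACTION_REQUIRED", comment);
        }
        // Reltio also rejected: close the DCR and keep both messages in the comment.
        return new DcrRecord(dcr.initialTargetType(), "REJECTED",
                comment + " Reltio also REJECTED the DCR.");
    }
}
```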
Steps:Check if the initial target type is [ONEKEY|VOD]Use the DCR Request that was initially received from PforceRx and is a Domain Model request (after validation) Send the DCR to Reltio; the service returns the following response:ACCEPTED (change request accepted by Reltio)update the status to DS_ACTION_REQUIRED and in the comment add the following: "This DCR was REJECTED by the [ONEKEY|VOD] Data Steward with the following comment: <[ONEKEY|VOD] reject comment>. Please review this DCR in Reltio and APPLY or REJECT. It is not possible to execute the sendTo3PartyValidation button in this case"initialize new Workflow in Reltio with the comment.save data in the DCR entity status in Reltio and update Mongo DCR Registry with workflow ID and other attributes that were used in this Flow.REJECTED  (failure or error response from Reltio)CLOSE the DCR with the information that the DCR was REJECTED by the [ONEKEY|VOD] and Reltio also REJECTED the DCR. Add the error message from both systems in the comment. TriggersTrigger actionComponentActionDefault timeIN Events incoming dcr-service-2:DCROneKeyResponseStreamdcr-service-2:DCRVeevaResponseStream ($env-internal-veeva-dcr-change-events-in)process publisher full change request events in the streamrealtime: events stream processing Dependent componentsComponentUsageDCR Service 2Main component with flow implementationManagerReltio Adapter  - API operationsPublisherEvents publisher generates incoming eventsHub StoreDCR and Entities Cache \n\n\n"
},
{
"title": "Reltio: create DCR method - direct",
"pageID": "209949292",
"pageLink": "/display/GMDM/Reltio%3A+create+DCR+method+-+direct",
"content": "DescriptionREST API method exposed in the Manager component responsible for submitting the Change Request to ReltioFlow diagramStepsReceive the DCR request generated by the DCR Service 2 componentDepending on the Action execute the method in the Manager component:insert - Execute the standard Create/Update HCP/HCO/MCO operation with an additional changeRequest.id parameterupdate - Execute the Update Attributes operation with an additional changeRequest.id parameterthe combination of IGNORE_ATTRIBUTE & INSERT_ATTRIBUTE when updating an existing parameter in Reltiothe INSERT_ATTRIBUTE when adding a new attribute to Reltiodelete - Execute the Update Attribute operation with an additional changeRequest.id parameterthe UPDATE_END_DATE on the entity to inactivate this profileBased on the Reltio response the DCR Response is returned:REQUEST_ACCEPTED - Reltio processed the request successfully REQUEST_FAILED - Reltio returned an exception; the Client will receive a detailed description in the errorMessageTriggersTrigger actionComponentActionDefault timeREST callDCR Service: POST /dcr2Create change Requests in ReltioAPI synchronous requests - realtimeDependent componentsComponentUsageDCR ServiceMain component with flow implementationHub StoreDCR and Entities Cache "
},
{
"title": "Reltio: process DCR Change Events",
"pageID": "209949300",
"pageLink": "/display/GMDM/Reltio%3A+process+DCR+Change+Events",
"content": "DescriptionThe process updates the DCRs based on the Change Request events received from Reltio (publishing). Based on the Data Steward decision the state attribute contains relevant information to update the DCR status. During this process the comments created by the DS are also retrieved and the relationship (optional step) between the DCR object and the newly created entity is created.Flow diagramStepsEvent publisher publishes simple events to $env-internal-reltio-dcr-change-events-in: DCR_CHANGED("CHANGE_REQUEST_CHANGED") and DCR_REMOVED("CHANGE_REQUEST_REMOVED")When the events do not contain the ThirdPartyValidation flag it means that the DS APPLIED or REJECTED the DCR, and the following logic is appliedEvents are processed in the Stream and based on the targetChangeRequest.state attribute a decision is madeIf the state is APPLIED or REJECTED, the DCR is retrieved from the cache based on the changeRequestURIIf the DCR exists in Cache The status in Reltio is updatedDCR entity attributesMappingVRStatusCLOSEDVRStatusDetailstate: APPLIED → ACCEPTEDstate: REJECTED → REJECTEDOtherwise, the events are rejected and the transaction is endedThe COMPANYCustomerGlobalId is retrieved for newly created entities in Reltio based on the main entity URI.During the process, the optional check is triggered - create the relation between the DCR object and newly created entitiesif DCRRegistry contains an empty list of entityUris, or some of the newly created entities are not present in the list, the Relation between this object and the DCR has to be createdDCR entity is updated in Reltio and the relation between the processed entity and the DCR entityReltio source name (crosswalk. 
type): DCRReltio relation type: HCPtoDCR or HCOtoDCR (depending on the object type)The comments added by the DataSteward during the processing of the Change Request are retrieved using the following operation:GET /tasks?objectURI=entities/<id>The processInstanceComments is retrieved from the response and added to DCRRegistry.changeRequestComment Otherwise, when the events contain the ThirdPartyValidation flag it means that the DS decided to send the DCR to IQVIA or VEEVA for validation, and the following logic is applied:If the current targetType is ONEKEY | VEEVAREJECT the DCR and add the comment on the DCR in Reltio that "DCR was already processed by [ONEKEY|VEEVA] Data Stewards, REJECT because it is not allowed to send this DCR one more time to [IQVIA|VEEVA]"If the current targetType is Reltio, it means that we can send this DCR to [IQVIA|VEEVA] for validation Use the DCR Request that was initially received from PforceRx and is a Domain Model request (after validation)Execute the POST /dcr method in the [ONEKEY|VEEVA] DCR Service; the service returns the following response:ACCEPTED - update the status to [SENT_TO_OK|SENT_TO_VEEVA]REJECTED - it means that some unexpected exception occurred in [ONEKEY|VEEVA], or the request was rejected by [ONEKEY|VEEVA], or the ONEKEY crosswalk does not exist in Reltio and the [ONEKEY|VEEVA] service rejected this requestVeeva specific: When the VOD crosswalk does not exist in Reltio, the current version of the profile is sent to Veeva for validation independently of the initial changes which were incorporated within the DCRTriggersTrigger actionComponentActionDefault timeIN Events incoming dcr-service-2:DCRReltioResponseStreamprocess publisher full change request events in the streamrealtime: events stream processing Dependent componentsComponentUsageDCR ServiceDCR Service 2Main component with flow implementationManagerReltio Adapter  - API operationsPublisherEvents publisher generates incoming eventsHub StoreDCR and Entities Cache "
},
{
"title": "Reltio: Profiles created by DCR",
"pageID": "510266969",
"pageLink": "/display/GMDM/Reltio%3A+Profiles+created+by+DCR",
"content": "DCR typeApproval/Reject Record visibility in MDMCrosswalk TypeCrosswalk ValueSourceDCR create for HCP/HCOApproved by OneKey/VODHCP/HCO created in MDMONEKEY|VODonekey id ONEKEY|VODApproved by DSRHCP/HCO created in MDMSystem source name from DCR (KOL_OneView, PforceRx, etc)DCR IDSystem source name from DCR (KOL_OneView, PforceRx, etc)DCR edit for HCP/HCOApproved by OneKey/VODHCP/HCO requested attribute updated in MDMONEKEY|VODONEKEY|VODApproved by DSRHCP/HCO requested attribute updated in MDMReltioentity uriReltioDCR edit for HCPaddress/HCO addressApproved by OneKey/VODNew address created in MDM, existing address marked as inactiveONEKEY|VODONEKEY|VODApproved by DSRNew address created in MDM, existing address marked as inactiveReltioentity uriReltio"
},
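The crosswalk rules in the "Profiles created by DCR" table above reduce to a small decision: third-party approval wins, otherwise the crosswalk depends on whether the DCR was a create or an edit. A minimal Java sketch; the method name, parameter list, and the [type, value, source] triple are illustrative assumptions, not the HUB's actual API.

```java
// Illustrative sketch of the crosswalk-selection table above (not HUB code).
import java.util.List;

class CrosswalkRules {
    /** Returns [crosswalkType, crosswalkValue, source] for an approved DCR. */
    static List<String> pickCrosswalk(boolean isCreate, boolean approvedByThirdParty,
                                      String thirdParty,   // "ONEKEY" or "VOD"
                                      String thirdPartyId, // onekey id / veeva id
                                      String dcrSource,    // e.g. "PforceRx", "KOL_OneView"
                                      String dcrId, String entityUri) {
        if (approvedByThirdParty) {
            // Approved by OneKey/VOD: third-party crosswalk carrying the third-party id
            return List.of(thirdParty, thirdPartyId, thirdParty);
        }
        if (isCreate) {
            // DSR-approved create: system source name from the DCR, DCR ID as value
            return List.of(dcrSource, dcrId, dcrSource);
        }
        // DSR-approved edit (attribute or address): Reltio crosswalk keyed by entity uri
        return List.of("Reltio", entityUri, "Reltio");
    }
}
```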
{
"title": "Veeva DCR flows",
"pageID": "379332475",
"pageLink": "/display/GMDM/Veeva+DCR+flows",
"content": "DescriptionThe process is responsible for creating DCRs which are stored (Store VR) to be further transferred and processed by Veeva. Changes can be suggested by the DS using "Suggest" operation in Reltio and "Send to Third Party Validation" button. All DCRs are saved in the dedicated collection in HUB Mongo DB, required to gather metadata and trace the changes for each DCR request. During this process, the communication to Veeva Opendata is established via S3/SFTP communication. SubmitVR operation is executed to create a new ZIP files with DCR requests spread across multiple CSV files. The TraceVR operation is executed to check if Veeva responded to initial DCR Requests via ZIP file placed Inbound S3 dir. The process is divided into 3 sections:Create DCR request - VeevaSubmit DCR Request - VeevaTrace Validation Request - VeevaThe below diagram presents an overview of the entire process. Detailed descriptions are available in the separated subpages.Business process diagram for R1 phaseFlow diagramStepsCreateVRProcess of saving DCR requests in Mongo Cache after being triggered by DCR Service 2.DCR request information is translated to Veeva's model and stored in dedicated collection for Veeva DCRs.SubmitVRThe process of submitting VR stored in Mongo Cache to Veeva's SFTP via S3 bucket. The process aggregates events stored in Mongo Cache since last submit.New ZIP is created with CSV files containing DCR request for Veeva. ZIP is placed in outbound dir in S3 bucket which is further synchronized to Veeva's SFTP. Each DCR is updated with ZIP file name which was used to transfer request to Veeva.TraceVRThe process of tracing VR is triggered each <T> hours by Spring Scheduler.Inbound S3 bucket is searched for ZIP files with CSVs containing DCR responses from Veeva. There are multiple dirs in S3 buckets, each for specific group of countries (currently CN and APAC).Parts of DCR responses are spread across multiple files. 
Combined information is being processed.Finally information about DCR is updated in Mongo Cache and events are produced to dedicated topic for DCR Service 2 for further processing.TriggersDCR service 2 is being triggered via /dcr API calls which are triggered by Data Stewards actions (R1 phase) → "Suggests 3rd party validation" which pushes DCR from Reltio to HUB.Dependent componentsDescribed in the separated sub-pages for each process.Design document for HUB development Design → VeevaOpenData-implementation.docxReltio HUB-VOD mapping → VeevaOpenDataAPACDataDictionary.xlsxVOD model description (v4) → Veeva_OpenData_APAC_Data_Dictionary v4.xlsx"
},
{
"title": "Create DCR request - Veeva",
"pageID": "386814533",
"pageLink": "/display/GMDM/Create+DCR+request+-+Veeva",
"content": "DescriptionThe process of creating new DCR requests to the Veeva OpenData. During this process, new DCRs are created in DCRregistryVeeva mongo collection.Flow diagramStepsService is called by Rest APIInput request is validated. If request is invalid - return response with status REJECTEDTransform input request to Veeva DCR modeltranslate lookup codes to Veeva source codesfill the Veeva DCR model with input request valuesSave DCR request to DCRRegistryVeeva mongo collection with status NEWMappingsDCR domain model→ VOD mapping file: VeevaOpenDataAPACDataDictionary-mmor-mapping.xlsxVeeva integration guide"
},
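The validate → translate → save steps described on the page above can be sketched as below. Class and field names (CreateVr, LOOKUP_TO_VEEVA, the in-memory STORE standing in for the DCRRegistryVeeva collection) are illustrative assumptions; the "?" fallback for unmapped lookup codes follows the convention described on the storeVR page.

```java
// Minimal sketch of the create-DCR flow: validate, translate lookup codes, save as NEW.
// All names here are illustrative stand-ins for the real service's model.
import java.util.HashMap;
import java.util.Map;

class CreateVr {
    static final Map<String, String> LOOKUP_TO_VEEVA = Map.of("SP.PD", "PD"); // canonical -> Veeva
    static final Map<String, String[]> STORE = new HashMap<>(); // stands in for DCRRegistryVeeva

    /** Returns REJECTED for an invalid request; otherwise stores {country, source, status}. */
    static String createVr(String dcrId, String country, String lookupCode) {
        if (dcrId == null || country == null) return "REJECTED"; // invalid request
        // Fall back to "?" when no LOV mapping exists between HUB and Veeva
        String source = lookupCode == null ? "?" : LOOKUP_TO_VEEVA.getOrDefault(lookupCode, "?");
        STORE.put(dcrId, new String[]{country, source, "NEW"});
        return "ACCEPTED";
    }
}
```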
{
"title": "Submit DCR Request - Veeva",
"pageID": "379333348",
"pageLink": "/display/GMDM/Submit+DCR+Request+-+Veeva",
"content": "DescriptionThe process of submitting new validation requests to the Veeva OpenData service via VeevaAdapter (communication with S3/SFTP) based on DCRRegistryVeeva mongo collection . During this process, new DCRs are created in VOD system.Flow diagramStepsVeeva DCR service flow:Every N hours Veeva DCR requests with status NEW are queried in DCRRegistryVeeva store.DCR are group by countryFor each country:merge Veeva DCR requests - create one zip file for each countryupload zip file to S3 locationupdate DCR status to SENT if upload status is successfulDCR entity attributesMappingDCRIDVeeva VR Request IdVRStatus"OPEN"VRStatusDetail"SENT"CreatedByMDM HUBSentDatecurrent timeSFTP integration service flow:Every N  hours grab all zip files from S3 locationsUpload files to corresponding SFTP serverTriggersTrigger actionComponentActionDefault timeSpring schedulermdm-veeva-dcr-service:VeevaDCRRequestSenderprepare ZIP files for VOD systemCalled every specified intervalDependent componentsComponentUsageVeeva adapterUpload DCR request to s3 location"
},
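The grouping step described on the page above (NEW DCRs grouped by country, one ZIP per country, status flipped to SENT on a successful upload) can be sketched as follows. The ZIP naming pattern here is hypothetical; only the response file pattern (<country>_DCR_Response_<Date>.zip) is documented.

```java
// Sketch of the SubmitVR grouping step; the S3 upload is assumed to succeed.
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

class SubmitVr {
    static class Vr {
        final String id, country;
        String status;
        Vr(String id, String country, String status) { this.id = id; this.country = country; this.status = status; }
    }

    /** Groups NEW DCRs by country; returns country -> ZIP name and marks each DCR SENT. */
    static Map<String, String> submit(List<Vr> pending, String date) {
        Map<String, String> zips = new LinkedHashMap<>();
        for (Vr vr : pending) {
            if (!"NEW".equals(vr.status)) continue; // only unsent DCRs are picked up
            zips.putIfAbsent(vr.country, vr.country + "_DCR_Request_" + date + ".zip");
            vr.status = "SENT"; // assumes the upload to the S3 location succeeded
        }
        return zips;
    }
}
```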
{
"title": "Trace Validation Request - Veeva",
"pageID": "379333358",
"pageLink": "/display/GMDM/Trace+Validation+Request+-+Veeva",
"content": "DescriptionThe process of tracing the VR changes based on the Veeva VR changes. During this process HUB, DCRRegistryVeeva Cache is triggered every <T> hour for SENT DCR's and check VR status using Veeva Adapter (s3/SFTP integration). After verification DCR event is sent to DCR Service 2  Veeva response stream.Flow diagramStepsEvery N get all Veeva DCR responses using Veeva AdapterFor each response:check if status is terminal - (CHANGE_ACCEPTED, CHANGE_PARTIAL, CHANGE_REJECTED, CHANGE_CANCELLED)if not - go to next responsequery DCRregistryVeeva mongo collection for DCR with given key and SENT statusget Veeva ID (vid__v) from response filegenerate Veeva DCR change eventupdate DCR status in DCRRegistryVeeva mongo collectionresolution is CHANGE_ACCEPTED, CHANGE_PARTIALDCR entity attributesMappingVRStatus"CLOSED"VRStatusDetail"ACCEPTED"ResponseTimeveeva response completed dateCommentsveeva response resolution notesresolution is CHANGE_REJECTED, CHANGE_CANCELLEDDCR entity attributesMappingVRStatus"CLOSED"VRStatusDetail"REJECTED"ResponseTimeveeva response completed dateCommentsveeva response resolution notesTriggersTrigger actionComponentActionDefault timeIN Spring schedulermdm-veeva-dcr-service:VeevaDCRRequestTracestart trace validation request processevery <T> hourOUT Kafka topicmdm-dcr-service-2:VeevaResponseStreamupdate DCR status in Reltio, create relationsinvokes Kafka producer for each veeva DCR responseDependent componentsComponentUsageDCR Service 2Process response event"
},
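The terminal-status handling on the page above boils down to one mapping, sketched below in Java (class and method names are illustrative, not the service's code):

```java
// Only terminal Veeva resolutions close the DCR; non-terminal responses are skipped.
import java.util.List;

class TraceStatus {
    /** Returns [VRStatus, VRStatusDetail] for terminal resolutions, null otherwise. */
    static List<String> map(String resolution) {
        switch (resolution) {
            case "CHANGE_ACCEPTED":
            case "CHANGE_PARTIAL":
                return List.of("CLOSED", "ACCEPTED");
            case "CHANGE_REJECTED":
            case "CHANGE_CANCELLED":
                return List.of("CLOSED", "REJECTED");
            default:
                return null; // non-terminal (e.g. CHANGE_PENDING): go to the next response
        }
    }
}
```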
{
"title": "Veeva: create DCR method (storeVR)",
"pageID": "379332642",
"pageLink": "/pages/viewpage.action?pageId=379332642",
"content": "DescriptionRest API method exposed in the Veeva DCR Service component responsible for creating new DCR requests specific to Veeva OpenData (VOD) and storing them in dedicated collection for further submit. Since VOD enables communication only via S3/SFTP, it's required to use dedicated mechanism to actually trigger CSV/ZIP file creation and file placement in outbound directory. This will periodic call to Submit VR method will be scheduled once a day (with cron) which will in the end call VeevaAdapter with method createChangeRequest.Flow diagramStepsReceive the API requestValidate initial requestcheck if the Veeva crosswalk exists once there is an update on the profileotherwise it's required to prepare DCR to create new Veeva profileIf there is any formal attribute missing or incorrect: skip requestThen the DCR is mapped to Veeva Request by invoking mapper between HUB DCR → VEEVA model For mapping purpose below mapping table should be used If there is not proper LOV mapping between HUB and Veeva, default fallback should be set to question mark → ?  Once proper request has been created, it should be stored as a VeevaVRDetails entry in dedicated DCRRegistryVeeva collection to be ready for actually send via Submit VR job and for future tracing purposesPrepare return response for initial API request with below logicGenerate sample request after successful mongo insert →  generateResponse(dcrRequest, RequestStatus.REQUEST_ACCEPTED, null, null)Generate error when validation or exception →  generateResponse(dcrRequest, RequestStatus.REQUEST_FAILED, getErrorDetails(), null);Mapping HUB DCR → Veeva model Below table does not contain all new attributes which are new in Reltio. Only the most important ones were mentioned there.File STTM Stats_SG_HK_v3.xlsx contains full mapping requirements from Veeva OpenData to Reltio data model. 
It does contain full data mapping which should be covered in target DCR process for VOD.ReltioHUBVEEVAAttribute PathDetailsDCR Request pathDetailsFile NameField NameRequired for Add Request?Required for Change Request?DescriptionReference (RDM/LOV)NOTEHCON/AMongo Generated ID for this DCR | Kafka KEYonce mapping from HUB Domain DCRRequest take this from DCRRequestD.dcrRequestId: String, // HUB DCR request id - Mongo ID - required in ONEKEY servicechange_requestdcr_keyYYCustomer's internal identifier for this requestChange Requests comments extDCRCommentchange_requestdescriptionYYRequester free-text comments explaining the DCRtargetChangeRequest.createdBycreatedBychange_requestcreated_byYYFor requestor identificationN/Aif new objects - ADD, if veeva ID CHANGEchange_requestchange_request_typeYYADD_REQUEST or CHANGE_REQUESTN/Adepends on suggested changes (check use-cases)main entity object type HCP or HCOchange_requestentity_typeYNHCP or HCOEntityTypeN/AMongo Generated ID for this DCR | Kafka KEYchange_request_hcodcr_keyYYCustomer's internal identifier for this requestReltio Uri and Reltio Typewhen insert new profileentities.HCO.updateCrosswalk.type (Reltio)entities.HCO.updateCrosswalk.value (Reltio id)and refId.entityURIconcatenate Reltio:rvu44dmchange_request_hcoentity_keyYYCustomer's internal HCO identifierCrosswalks - VEEVA crosswalkwhen update on VEEVAentities.HCO.updateCrosswalk.type (VEEVA)entities.HCO.updateCrosswalk.value (VEEVA ID)change_request_hcovid__vYNVeeva ID of existing HCO to update; if blank, the request will be interpreted as an add requestconfiguration/entityTypes/HCO/attributes/OtherNames/attributes/Namefirst elementTODO - add new attributechange_request_hcoalternate_name_1__vYN????change_request_hcobusiness_type__vYNHCOBusinessTypeTO BE CONFIRMEDconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/FacilityTypeHCO.subTypeCodechange_request_hcpmajor_class_of_trade__vNNCOTFacilityTypeIn PforceRx - Account Type, more info: \n MR-9512\n 
-\n Getting issue details...\n STATUS\n configuration/entityTypes/HCO/attributes/Namenamechange_request_hcocorporate_name__vNYconfiguration/entityTypes/HCO/attributes/TotalLicenseBedsTODO - add new attributechange_request_hcocount_beds__vNYconfiguration/entityTypes/HCO/attributes/Email/attributes/Emailemail with rank 1emailschange_request_hcoemail_1__vNNconfiguration/entityTypes/HCO/attributes/Email/attributes/Emailemail with rank 2change_request_hcoemail_2__vNNconfiguration/entityTypes/HCO/attributes/Phone/attributes/Numberphone type TEL.FAX with best rankphoneschange_request_hcofax_1__vNNconfiguration/entityTypes/HCO/attributes/Phone/attributes/Numberphone type TEL.FAX with worst rankchange_request_hcofax_2__vNNconfiguration/entityTypes/HCO/attributes/StatusDetailTODO - add new attributechange_request_hcohco_status__vNNHCOStatusconfiguration/entityTypes/HCO/attributes/TypeCodetypecodechange_request_hcohco_type__vNNHCOTypeconfiguration/entityTypes/HCO/attributes/Phone/attributes/Numberphone type TEL.OFFICE with best rankphoneschange_request_hcophone_1__vNNconfiguration/entityTypes/HCO/attributes/Phone/attributes/Numberphone type TEL.OFFICE with worst rankchange_request_hcophone_2__vNNconfiguration/entityTypes/HCO/attributes/Phone/attributes/Numberphone type TEL.OFFICE with worst rankchange_request_hcophone_3__vNNconfiguration/entityTypes/HCO/attributes/CountryDCRRequest.countrychange_request_hcoprimary_country__vNNconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Specialtyelements from COT 
specialtieschange_request_hcospecialty_1__vNNconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Specialtychange_request_hcospecialty_10__vNNSpecialityconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Specialtychange_request_hcospecialty_2__vNNconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Specialtychange_request_hcospecialty_3__vNNconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Specialtychange_request_hcospecialty_4__vNNconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Specialtychange_request_hcospecialty_5__vNNconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Specialtychange_request_hcospecialty_6__vNNconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Specialtychange_request_hcospecialty_7__vNNconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Specialtychange_request_hcospecialty_8__vNNconfiguration/entityTypes/HCO/attributes/ClassofTradeN/attributes/Specialtychange_request_hcospecialty_9__vNNconfiguration/entityTypes/HCO/attributes/Website/attributes/WebsiteURLfirst elementwebsiteURLchange_request_hcoURL_1__vNNconfiguration/entityTypes/HCO/attributes/Website/attributes/WebsiteURLN/AN/Achange_request_hcoURL_2__vNNHCP N/AMongo Generated ID for this DCR | Kafka KEYchange_request_hcpdcr_keyYYCustomer's internal identifier for this requestReltio Uri and Reltio Typewhen insert new profileentities.HCO.updateCrosswalk.type (Reltio)entities.HCO.updateCrosswalk.value (Reltio id)and refId.entityURIconcatenate Reltio:rvu44dmchange_request_hcpentity_keyYYCustomer's internal HCP identifierconfiguration/entityTypes/HCP/attributes/CountryDCRRequest.countrychange_request_hcpprimary_country__vYYCrosswalks - VEEVA crosswalkwhen update on VEEVAentities.HCO.updateCrosswalk.type (VEEVA)entities.HCO.updateCrosswalk.value (VEEVA 
ID)change_request_hcpvid__vNYconfiguration/entityTypes/HCP/attributes/FirstNamefirstNamechange_request_hcpfirst_name__vYNconfiguration/entityTypes/HCP/attributes/MiddlemiddleNamechange_request_hcpmiddle_name__vNNconfiguration/entityTypes/HCP/attributes/LastNamelastNamechange_request_hcplast_name__vYNconfiguration/entityTypes/HCP/attributes/NicknameTODO - add new attributechange_request_hcpnickname__vNNconfiguration/entityTypes/HCP/attributes/Prefixprefixchange_request_hcpprefix__vNNHCPPrefixconfiguration/entityTypes/HCP/attributes/SuffixNamesuffixchange_request_hcpsuffix__vNNconfiguration/entityTypes/HCP/attributes/Titletitlechange_request_hcpprofessional_title__vNNHCPProfessionalTitleconfiguration/entityTypes/HCP/attributes/SubTypeCodesubTypeCodechange_request_hcphcp_type__vYNHCPTypeconfiguration/entityTypes/HCP/attributes/StatusDetailTODO - add new attributechange_request_hcphcp_status__vNNHCPStatusconfiguration/entityTypes/HCP/attributes/AlternateName/attributes/FirstNameTODO - add new attributechange_request_hcpalternate_first_name__vNNconfiguration/entityTypes/HCP/attributes/AlternateName/attributes/LastNameTODO - add new attributechange_request_hcpalternate_last_name__vNNconfiguration/entityTypes/HCP/attributes/AlternateName/attributes/MiddleNameTODO - add new attributechange_request_hcpalternate_middle_name__vNN??TODO - add new attributechange_request_hcpfamily_full_name__vNNTO BE CONFRIMEDconfiguration/entityTypes/HCP/attributes/DoBbirthYearchange_request_hcpbirth_year__vNNconfiguration/entityTypes/HCP/attributes/Credential/attributes/Credentialby rank 1TODO - add new attributechange_request_hcpcredentials_1__vNNTO BE CONFIRMEDconfiguration/entityTypes/HCP/attributes/Credential/attributes/Credential2TODO - add new attributechange_request_hcpcredentials_2__vNNIn reltio there is attribute but not usedconfiguration/entityTypes/HCP/attributes/Credential/attributes/Credential3TODO - add new attributechange_request_hcpcredentials_3__vNN                            
"uri": "configuration/entityTypes/HCP/attributes/Credential/attributes/Credential",configuration/entityTypes/HCP/attributes/Credential/attributes/Credential4TODO - add new attributechange_request_hcpcredentials_4__vNN                            "lookupCode": "rdm/lookupTypes/Credential",configuration/entityTypes/HCP/attributes/Credential/attributes/Credential5TODO - add new attributechange_request_hcpcredentials_5__vNNHCPCredentials                            "skipInDataAccess": false??TODO - add new attributechange_request_hcpfellow__vNNBooleanReferenceTO BE CONFRIMEDconfiguration/entityTypes/HCP/attributes/Gendergenderchange_request_hcpgender__vNNHCPGender?? Education ??TODO - add new attributechange_request_hcpeducation_level__vNNHCPEducationLevelTO BE CONFRIMEDconfiguration/entityTypes/HCP/attributes/Education/attributes/SchoolNameTODO - add new attributechange_request_hcpgrad_school__vNNconfiguration/entityTypes/HCP/attributes/Education/attributes/YearOfGraduationTODO - add new attributechange_request_hcpgrad_year__vNN??change_request_hcphcp_focus_area_10__vNNTO BE CONFRIMED??change_request_hcphcp_focus_area_1__vNN??change_request_hcphcp_focus_area_2__vNN??change_request_hcphcp_focus_area_3__vNN??change_request_hcphcp_focus_area_4__vNN??change_request_hcphcp_focus_area_5__vNN??change_request_hcphcp_focus_area_6__vNN??change_request_hcphcp_focus_area_7__vNN??change_request_hcphcp_focus_area_8__vNN??change_request_hcphcp_focus_area_9__vNNHCPFocusArea??change_request_hcpmedical_degree_1__vNNTO BE CONFRIMED??change_request_hcpmedical_degree_2__vNNHCPMedicalDegreeconfiguration/entityTypes/HCP/attributes/Specialities/attributes/Specialtyby rank from 1 to 
100specialtieschange_request_hcpspecialty_1__vYNconfiguration/entityTypes/HCP/attributes/Specialities/attributes/Specialtyspecialtieschange_request_hcpspecialty_10__vNNconfiguration/entityTypes/HCP/attributes/Specialities/attributes/Specialtyspecialtieschange_request_hcpspecialty_2__vNNconfiguration/entityTypes/HCP/attributes/Specialities/attributes/Specialtyspecialtieschange_request_hcpspecialty_3__vNNconfiguration/entityTypes/HCP/attributes/Specialities/attributes/Specialtyspecialtieschange_request_hcpspecialty_4__vNNconfiguration/entityTypes/HCP/attributes/Specialities/attributes/Specialtyspecialtieschange_request_hcpspecialty_5__vNNconfiguration/entityTypes/HCP/attributes/Specialities/attributes/Specialtyspecialtieschange_request_hcpspecialty_6__vNNconfiguration/entityTypes/HCP/attributes/Specialities/attributes/Specialtyspecialtieschange_request_hcpspecialty_7__vNNconfiguration/entityTypes/HCP/attributes/Specialities/attributes/Specialtyspecialtieschange_request_hcpspecialty_8__vNNconfiguration/entityTypes/HCP/attributes/Specialities/attributes/Specialtyspecialtieschange_request_hcpspecialty_9__vNNSpecialtyconfiguration/entityTypes/HCP/attributes/WebsiteURLTODO - add new attributechange_request_hcpURL_1__vNNADDRESSMongo Generated ID for this DCR | Kafka KEYchange_request_addressdcr_keyYYCustomer's internal identifier for this requestReltio Uri and Reltio Typewhen insert new profileentities.HCP OR HCO.updateCrosswalk.type (Reltio)entities.HCP OR HCO.updateCrosswalk.value (Reltio id)and refId.entityURIconcatenate Reltio:rvu44dmchange_request_addressentity_keyYYCustomer's internal HCO/HCP identifierattributes/Addresses/attributes/COMPANYAddressIDaddress.refIdchange_request_addressaddress_keyYYCustomer's internal address 
identifierattributes/Addresses/attributes/AddressLine1addressLine1change_request_addressaddress_line_1__vYNattributes/Addresses/attributes/AddressLine2addressLine2change_request_addressaddress_line_2__vNNattributes/Addresses/attributes/AddressLine3addressLine3change_request_addressaddress_line_3__vNNN/AN/AAchange_request_addressaddress_status__vNNAddressStatusattributes/Addresses/attributes/AddressTypeaddressTypechange_request_addressaddress_type__vYNAddressTypeattributes/Addresses/attributes/StateProvincestateProvincechange_request_addressadministrative_area__vYNAddressAdminAreaattributes/Addresses/attributes/Countrycountrychange_request_addresscountry__vYNattributes/Addresses/attributes/Citycitychange_request_addresslocality__vYYattributes/Addresses/attributes/Zip5zipchange_request_addresspostal_code__vYNattributes/Addresses/attributes/Source/attributes/SourceNameattributes/Addresses/attributes/Source/attributes/SourceAddressIDwhen VEEVA map VEEVA ID to sourceAddressIdchange_request_addressvid__vNYmap fromrelationTypes/OtherHCOtoHCOAffiliationsor relationTypes/ContactAffiliationsThis will be HCP.ContactAffiliation or HCO.OtherHcoToHCO affiliationMongo Generated ID for this DCR | Kafka KEYchange_request_parenthcodcr_keyYYCustomer's internal identifier for this requestHCO.otherHCOAffiliations.relationUriorHCP.contactAffiliations.relationUri (from Domain model)information about Reltio Relation IDchange_request_parenthcoparenthco_keyYYCustomer's internal identifier for this relationshipRELATION IDKEY entity_key from HCP or HCO (start object)change_request_parenthcochild_entity_keyYYChild Identifier in the HCO/HCP fileSTART OBJECT IDendObject entity uri mapped to refId.EntityURITargetObjectIdKEY entity_key from HCP or HCO (end object, by affiliation)change_request_parenthcoparent_entity_keyYYParent identifier in the HCO fileEND OBJECT IDchanges in Domain model mappingmap Reltion.Source.SourceName - VEEVAmap Relation.Source.SourceValue - VEEVA IDadd to Domain modelmap 
if relation is from VEEVA ID change_request_parenthcovid__vNYstart object entity type change_request_parenthcoentity_type__vYNattributes/RelationType/attributes/PrimaryAffiliationif is primaryTODO - add new attribute to otherHcoToHCOchange_request_parenthcois_primary_relationship__vNNBooleanReferenceHCO_HCO or HCP_HCOchange_request_parenthcohierarchy_type__vRelationHierarchyTypeattributes/RelationType/attributes/RelationshipDescriptiontype from affiliationbased on ContactAffliation or OtherHCOToHCO affiliationI think it will be 14-Emploted for HCP_HCOand 4-Manages for HCO_HCObut maybe we can map from affiliation.typechange_request_parenthcorelationship_type__vYNRelationTypeMongo collectionAll DCRs initiated by the dcr-service-2 API and to be sent to Veeva will be stored in Mongo in new collection DCRRegistryVeeva. The idea is to gather all DCRs requested by the client through the day and schedule SubmitVR process that will communicate with Veeva adapter.Typical use case: Client requests 3 DCRs during the daySubmitVR contains the schedule that gathers all DCRs with NEW status created during the day and using VeevaAdapter to push requests to S3/SFTP.In this store we are going to keep both types of DCRs:\ninitiated by PforceRX - PFORCERX_DCR("PforceRxDCR")\ninitiated by Reltio SubmitVR - SENDTO3PART_DCR("ReltioSuggestedAndSendTo3PartyDCR");\nStore class idea:_id this is the same ID that was assigned to DCR in dcr-service-2 VeevaVRDetails\n@Document("DCRRegistryVEEVA")\n@JsonIgnoreProperties(ignoreUnknown = true)\n@JsonInclude(JsonInclude.Include.NON_NULL)\ndata class VeevaVRDetails(\n    @JsonProperty("_id")\n    @Id\n    val id: String? = null,\n    val type: DCRType,\n    val status: DCRRequestStatusDetails,\n    val createdBy: String? = null,\n    val createTime: ZonedDateTime? = null,\n    val endTime: ZonedDateTime? = null,\n    val veevaRequestTime: ZonedDateTime? = null,\n    val veevaResponseTime: ZonedDateTime? = null,\n    val veevaRequestFileName: String? 
= null,\n    val veevaResponseFileName: String? = null,\n    val veevaResponseFileTime: ZonedDateTime? = null,\n    val country: String? = null,\n    val source: String? = null,\n    val extDCRComment: String? = null, // external DCR Comment (client comment)\n    val trackingDetails: List<DCRTrackingDetails> = mutableListOf(),\n\n    // RAW FILE LINES mapped from DCRRequestD to Veeva model\n    val veevaRequest:\n            val change_request_csv: String,\n            val change_request_hcp_csv: String\n            val change_request_hco_csv: List<String>\n            val change_request_address_csv: List<String>\n            val change_request_parenthco_csv: List<String>\n\n    // RAW FILE LINES mapped from Veeva Response model\n    val veevaResponse:\n            val change_request_response_csv: String,\n            val change_request_response_hcp_csv: String\n            val change_request_response_hco_csv: List<String>\n            val change_request_response_address_csv: List<String>\n            val change_request_response_parenthco_csv: List<String>\n)\nMapping Reltio canonical codes → Veeva source codesThere are a couple of steps performed to find out a mapping for a canonical code from Reltio to a source code understood by VOD. The below steps are performed (in this order) until a code is found. Veeva Defaults Configuration is stored in mdm-config-registry > config-hub/stage_apac/mdm-veeva-dcr-service/defaultsThe purpose of this logic is to select one of possibly multiple source codes on the VOD end for a single code on the COMPANY side (1:N). The other scenario is when there is no actual source code for a canonical code on the VOD end (1:0), however this is usually covered by the fallback code logic.There are a couple of files, each containing source codes for a specific attribute. 
The ones related to HCO.Specialty and HCP.Specialty have logic which selects the proper code.Usually they are constructed as a three-column CSV: Country, Canonical Code, Source CodeFor a specific Country we're looking for the Canonical code and then we're sending the Source code as it is (no trim required)Examples: IN;SP.PD;PD → PD source code will be sent to VODRDM lookups with RegExpThe main logic which is used to find out the proper source code for a canonical code. We're using codes configured in RDM, however the mongo collection LookupValues is used. For a specific canonical code (code) we look for sourceMappings with source = VOD. Often the country is embedded within the source code so we're applying regexpConfig (more in the Veeva Fallback section) to extract the specific source code for a particular country.Veeva FallbackConfiguration is stored in mdm-config-registry > config-hub/stage_apac/mdm-veeva-dcr-service/fallbackAvailable for a couple of attributes: hco-cot-facility-type.csvCOTFacilityTypehco-specialty.csvCOTSpecialtyhco-type-code.csvHCOTypehcp-specialty.csvHCPSpecialtyhcp-title.csvHCPTitlehcp-type-code.csvHCPSubTypeCodeUsually files are constructed as a one-column CSV, however the logic for extracting the source code may be differentThe source code is extracted using a RegExp for each parameter. Check application.yml for the mdm-veeva-dcr-server component - mdm-inboud-services > mdm-veeva-dcr-service/src/main/resources/application.yml to find out the proper line and extract the code sent to VOD.Example value for hco-specialty-type.csv: IN_?Regexp value for HCP.specialty: regexpConfig > HCPSpecialty: ^COUNTRY_(.+)$Source code sent to VOD for the India country: "?" (only a question mark without the country prefix)TriggersTrigger actionComponentActionDefault timeREST callmdm-veeva-dcr-service: POST /dcr → veevaDCRService.createChangeRequest(request)Creates the DCR and stores it in the collection without actually sending it to Veeva. 
API synchronous requests - realtimeDependent componentsComponentUsageDCR Service 2Main component with flow implementationHub StoreDCR and Entities Cache "
},
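The canonical-to-source-code resolution order described on the page above (Veeva defaults CSV first, then RDM sourceMappings filtered through the country regexp, then the fallback file) can be sketched like this. Data shapes are deliberately simplified assumptions; the real service loads defaults and fallbacks from mdm-config-registry and sourceMappings from the LookupValues collection.

```java
// Sketch of the three-step canonical -> VOD source-code resolution (illustrative shapes).
import java.util.List;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class SourceCodeResolver {
    static String resolve(String country, String canonical,
                          Map<String, String> defaults,   // "country;canonical" -> source code
                          List<String> rdmSourceMappings, // VOD source codes, country-prefixed
                          String regexpTemplate,          // e.g. "^COUNTRY_(.+)$"
                          String fallback) {              // e.g. "IN_?"
        String fromDefaults = defaults.get(country + ";" + canonical);
        if (fromDefaults != null) return fromDefaults;    // 1. Veeva defaults win
        Pattern p = Pattern.compile(regexpTemplate.replace("COUNTRY", country));
        for (String code : rdmSourceMappings) {           // 2. RDM lookups with RegExp
            Matcher m = p.matcher(code);
            if (m.find()) return m.group(1);
        }
        if (fallback != null) {                           // 3. Veeva fallback file
            Matcher m = p.matcher(fallback);
            if (m.find()) return m.group(1);
        }
        return null;
    }
}
```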
{
"title": "Veeva: create DCR method (submitVR)",
"pageID": "386796763",
"pageLink": "/pages/viewpage.action?pageId=386796763",
"content": "DescriptionGather all stored DCR entities in DCRRegistryVeeva collection (status = NEW) and sends them via S3/SFTP to Veeva OpenData (VOD). This method triggers CSV/ZIP file creation and file placement in outbound directory. This method is triggered from cron which invokes VeevaDCRRequestSender.sendDCRs() from the Veeva DCR Service Flow diagramStepsReceive the API request via scheduled trigger, usually every 24h (senderConfiguration.schedulerConfig.fixedDelay) at specific time of day (senderConfiguration.schedulerConfig.initDelay)All DCR entities (VeevaVRDetails) with status NEW are being retrieved from DCRRegistryVeeva collection Then VeevaCreateChangeRequest object is created which aggregates all CSV content which should be placed in actual CSV files. Each object contains only DCRs specific for countryEach country has its own S3/SFTP directory structure as well as dedicated SFTP server instanceOnce CSV files are created with header and content, they are packed into single ZIP fileFinally ZIP file is placed in outbound S3 directoryIf file was placedsuccessfuly - then VeevaChangeRequestACK status = SUCCESSotherwise - then VeevaChangeRequestACK status = FAILURE and process endsFinally, status of VeevaVRDetails entity in DCRRegistryVeeva collection is updated and set to SENT_TO_VEEVATriggersTrigger actionComponentActionDefault timeTimer (cron)mdm-veeva-dcr-service: VeevaDCRRequestSender.sendDCRs()Takes all unsent entities (status = NEW) from Veeva collection and actually puts file on S3/SFTP directory via veevaAdapter.createDCRsUsually every 24h (senderConfiguration.schedulerConfig.fixedDelay) at specific time of day (senderConfiguration.schedulerConfig.initDelay)Dependent componentsComponentUsageDCR Service 2Main component with flow implementationHub StoreDCR and Entities Cache "
},
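The "pack CSV files into a single ZIP" step above can be reproduced with plain java.util.zip. The entry names below follow the CSV files named elsewhere on these pages; the exact per-country layout is configuration-specific and assumed here.

```java
// Runnable sketch of packing per-country CSV content into one in-memory ZIP.
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

class ZipPacker {
    /** Packs each (fileName -> csvContent) pair into one in-memory ZIP archive. */
    static byte[] zipCsvFiles(Map<String, String> csvByName) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ZipOutputStream zip = new ZipOutputStream(bytes)) {
            for (Map.Entry<String, String> e : csvByName.entrySet()) {
                zip.putNextEntry(new ZipEntry(e.getKey()));
                zip.write(e.getValue().getBytes());
                zip.closeEntry();
            }
        }
        return bytes.toByteArray();
    }

    /** Lists entry names, as the trace side would when unpacking a response ZIP. */
    static List<String> listEntries(byte[] zipped) throws IOException {
        List<String> names = new ArrayList<>();
        try (ZipInputStream zip = new ZipInputStream(new ByteArrayInputStream(zipped))) {
            for (ZipEntry e = zip.getNextEntry(); e != null; e = zip.getNextEntry()) {
                names.add(e.getName());
            }
        }
        return names;
    }
}
```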
{
"title": "Veeva: generate DCR Change Events (traceVR)",
"pageID": "379329922",
"pageLink": "/pages/viewpage.action?pageId=379329922",
"content": "DescriptionThe process is responsible for gathering DCR responses from Veeva OpenData (VOD). Responses are provided via CSV/ZIP files placed on S3/SFTP server in inbound directory which are specific for each country. During this process files should be retrieved, mapped from VOD to HUB DCR model and published to Kafka topic to be properly processed by DCR Service 2, Veeva: process DCR Change Events.Flow diagramSource: LucidStepsMethod is trigger via cron, usually every 24h (traceConfiguration.schedulerConfig.fixedDelay) at specific time of day (traceConfiguration.schedulerConfig.initDelay)For each country, each inbound directory in scanned for ZIP filesEach ZIP files (<country>_DCR_Response_<Date>.zip) should be unpacked and processed. A bunch of CSV files should be extracted. Specifically:change_request_response.csv → it's a manifest file with general information in specific columnsdcr_key → ID of DCR which was established during DCR request creation entity_key → ID of entity in Reltio, the same one we provided during DCR request creationentity_type → type of entity (HCO, HCP) which is being modified via this DCRresolution → has information whether DCR was accepted or rejected. 
Full list of values is below.resolution valueDescriptionCHANGE_PENDINGThis change is still processing and hasn't been resolvedCHANGE_ACCEPTEDThis change has been accepted without modificationCHANGE_PARTIALThis change has been accepted with additional changes made by the steward, or some parts of the change request have been rejectedCHANGE_REJECTEDThis change has been rejected in its entiretyCHANGE_CANCELLEDThis change has been cancelledchange_request_type change_request_type valueDescriptionADD_REQUESTthe DCR caused a new profile to be created in VOD with a new vid__v (Veeva id)CHANGE_REQUESTjust an update of an existing profile in VOD with an existing and already known vid__v (Veeva id)change_request_hcp_response.csv - contains information about DCRs related to HCPchange_request_hco_response.csv - contains information about DCRs related to HCOchange_request_address_response.csv - contains information about DCRs related to addresses which are related to a specific HCP or HCOchange_request_parenthco_response.csv - contains information about DCRs which correspond to relations between HCP and HCO, and HCO and HCOFile with log: <country>_DCR_Request_Job_Log.csv can be skipped. It does not contain any useful information to be processed automaticallyFor all DCR responses from VOD, the corresponding DCR entity (VeevaVRDetails) should be selected from the DCRRegistryVeeva collection. In general, specific response files are not that important (VOD profile updates will be ingested to HUB via the ETL channel), however when new profiles are created (change_request_response.csv.change_request_type = ADD_REQUEST) we need to extract their Veeva IDs. 
We need to deep dive into change_request_hcp_response.csv or change_request_hco_response.csv to find the vid__v (Veeva ID) for a specific dcr_key. This new Veeva ID should be stored in VeevaDCREvent.vrDetails.veevaHCPIdsIt should be further used as a crosswalk value in Reltio:entities.HCO.updateCrosswalk.type (VEEVA)entities.HCO.updateCrosswalk.value (VEEVA ID)Once data has been properly mapped from Veeva to the HUB DCR model, a new VeevaDCREvent entity should be created and published to the dedicated Kafka topic $env-internal-veeva-dcr-change-events-inPlease be advised, when the status of the resolution is not final (CHANGE_ACCEPTED, CHANGE_REJECTED, CHANGE_CANCELLED, CHANGE_PARTIAL) we should not send an event to DCR-service-2Then for each successfully processed DCR, the entity (VeevaVRDetails) in the Mongo DCRRegistryVeeva collection should be updated Veeva CSV: resolutionMongo: DCRRegistryVeeva Entity: VeevaVRDetails.status: DCRRequestStatusDetailsTopic: $env-internal-veeva-dcr-change-events-inEvent: VeevaDCREvent.vrDetails.vrStatusTopic: $env-internal-veeva-dcr-change-events-inEvent: VeevaDCREvent.vrDetails.vrStatusDetailCHANGE_PENDINGstatus should not be updated at all (stays as SENT)do not send events to DCR-service-2 do not send events to DCR-service-2 CHANGE_ACCEPTEDACCEPTEDCLOSEDACCEPTEDCHANGE_PARTIALACCEPTEDCLOSEDACCEPTEDresolutionNotes / veevaComment should contain more information on what was rejected by VEEVA DSCHANGE_REJECTEDREJECTEDCLOSEDREJECTEDCHANGE_CANCELLEDREJECTEDCLOSEDREJECTEDOnce the files are processed, the ZIP file should be moved from the inbound to the archive directoryEvent VeevaDCREvent Model\ndata class VeevaDCREvent (val eventType: String? = null,\n                          val eventTime: Long? = null,\n                          val eventPublishingTime: Long? = null,\n                          val countryCode: String? = null,\n                          val dcrId: String? 
= null,\n                          val vrDetails: VeevaChangeRequestDetails)\n\ndata class VeevaChangeRequestDetails (\n    val vrStatus: String? = null, // HUB codes\n    val vrStatusDetail: String? = null, // HUB codes\n    val veevaComment: String? = null,\n    val veevaHCPIds: List<String>? = null,\n    val veevaHCOIds: List<String>? = null)\nTriggersTrigger actionComponentActionDefault timeIN Timer (cron)mdm-veeva-dcr-service: VeevaDCRRequestTrace.traceDCRs()get DCR responses from S3/SFTP directory, extract CSV files from ZIP file and publish events to Kafka topicevery <T> hourusually every 6h (traceConfiguration.schedulerConfig.fixedDelay) at specific time of day (traceConfiguration.schedulerConfig.initDelay)OUT Events on Kafka Topicmdm-veeva-dcr-service: VeevaDCRRequestTrace.traceDCRs()$env-internal-veeva-dcr-change-events-inVeevaDCREvent event published to topic to be consumed by DCR Service 2every <T> hourusually every 6h (traceConfiguration.schedulerConfig.fixedDelay) at specific time of day (traceConfiguration.schedulerConfig.initDelay)Dependent componentsComponentUsageDCR Service 2Main component with flow implementationHub StoreDCR and Entities Cache "
},
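The resolution handling above (final vs. non-final statuses and the CSV-to-HUB status mapping) can be sketched as below. This is an illustrative Python sketch, not the service code (the actual component is a JVM microservice); the function name is an assumption, while the status values come from the mapping table above.

```python
# Mapping of a final Veeva CSV "resolution" value to the HUB status
# fields described above: (Mongo VeevaVRDetails.status, event vrStatus,
# event vrStatusDetail). CHANGE_PENDING is deliberately absent: the
# status stays SENT and no event is published to DCR-service-2.
FINAL_RESOLUTIONS = {
    "CHANGE_ACCEPTED":  ("ACCEPTED", "CLOSED", "ACCEPTED"),
    "CHANGE_PARTIAL":   ("ACCEPTED", "CLOSED", "ACCEPTED"),
    "CHANGE_REJECTED":  ("REJECTED", "CLOSED", "REJECTED"),
    "CHANGE_CANCELLED": ("REJECTED", "CLOSED", "REJECTED"),
}

def map_resolution(resolution):
    """Return (status, vr_status, vr_status_detail), or None when the
    resolution is not final and no event should be sent."""
    return FINAL_RESOLUTIONS.get(resolution)
```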
{
"title": "ETL Batches",
"pageID": "164470046",
"pageLink": "/display/GMDM/ETL+Batches",
"content": "DescriptionThe process is responsible for managing the batch instances/stages and loading data received from the ETL channel to the MDM system. The Batch service is a complex component that contains predefined JOBS and a Batch Workflow configuration that uses the JOB implementations; using asynchronous communication over Kafka topics, it updates data in the MDM system and gathers the acknowledgment events. The Mongo cache stores the BatchInstances with corresponding stages and EntityProcessStatus objects that contain metadata information about loaded objects.The below diagram presents an overview of the entire process. Detailed descriptions are available in the separate subpages.Flow diagramModel diagramStepsThe client is able to create a new instance of the batch using - Batch Controller: creating and updating batch instance flowOnce the batch instance is created, the client is able to load the data using - Bulk Service: loading bulk data flowDuring data load, the following process startsSending JOB - sends data received from the REST API to Kafka Stage topicsProcessing JOB - checks, for the specific load, whether all ACKs were receivedSoftDeleting JOB - an optional job, triggered at the end of a batch configured to use full file load - this starts the delta detection process and soft-deletes the objectsACK Collector - a streaming process that gathers events and updates the Cache with the MDM response statusFor support purposes, an additional Clear Cache operation is exposedTriggersDescribed in the separate sub-pages for each process.Dependent componentsComponentUsageBatch ServiceMain component with flow implementationManagerAsynchronous events processingHub StoreDatastore and cache"
},
{
"title": "ACK Collector",
"pageID": "164469774",
"pageLink": "/display/GMDM/ACK+Collector",
"content": "DescriptionThe flow processes the ACK response messages and updates the cache. Based on these responses, the Processing flow checks the Cache status and blocks the workflow until all responses are received. This process updates the "status" attribute with the MDM system response and the "updateDateMDM" with the corresponding update timestamp. Flow diagramStepsThe Manager publishes ACK responses to the Batch ACK queue for each object processed through batch-serviceThe ACK Collector processes the events in streaming mode and updates the status in the cache. The following attributes are updated:status - MDM status that HUB received after the entity/relationship object was created/updated/soft-deletedupdateDateMDM - timestamp when the ACK was receivedentityId - corresponding entity/relation URI that is given by the MDM systemerrorCode - optional MDM error code when the status is failederrorMessage - optional MDM error message that contains a detailed description when the status is failed. TriggersTrigger actionComponentActionDefault timeIN Events incoming batch-service:AckProcessorupdates the cache based on the ACK responserealtimeDependent componentsComponentUsageBatch ServiceThe main componentManagerAsync route with ACK responsesHub StoreCache"
},
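The attribute updates listed above can be sketched as follows (illustrative Python; the field names come from the list above, while the helper name is an assumption):

```python
import time

def apply_ack(cache_entry, ack):
    """Copy the MDM ACK response onto the cached object and stamp
    updateDateMDM, as described above. Error fields are only set for
    failed responses."""
    cache_entry["status"] = ack["status"]
    cache_entry["updateDateMDM"] = int(time.time() * 1000)  # epoch millis
    cache_entry["entityId"] = ack.get("entityId")
    if ack["status"] == "failed":
        cache_entry["errorCode"] = ack.get("errorCode")
        cache_entry["errorMessage"] = ack.get("errorMessage")
    return cache_entry
```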
{
"title": "Batch Controller: creating and updating batch instance",
"pageID": "164469788",
"pageLink": "/display/GMDM/Batch+Controller%3A+creating+and+updating+batch+instance",
"content": "DescriptionThe batch controller is responsible for managing the Batch Instances. The service allows creation of a new batch instance for a specific Batch, creation of a new Stage in the batch, and updating a stage with statistics. The Batch controller component manages the batch instances and validates the requests. Only authorized users are allowed to manage specific batches or stages. Additionally, it is not possible to START multiple instances of the same batch at one time. Once the batch is started, the Client should load the data and, at the end, complete the current batch instance. Once the user creates a new batch instance, a new unique ID is assigned; in subsequent requests the user has to use this ID to update the workflow. By default, once the batch instance is created all stages are initialized with status PENDING. The Batch controller also manages the dependent stages and marks the whole batch as COMPLETED at the end. Flow diagramStepsThe first step the User has to make is the initialization of a new Batch Instance; during this operation the process starts and a new unique ID is assigned.Using the Unique ID and an available Stage name, the user is able to start the STAGE. (By design, users have access only to the first "Loading" stage, but this can be changed in the configuration if required.) In this request, the Body objects may be empty. It will cause the initialization of this specific STAGE - changed to STARTED.At that moment the user is able to load data - the description is available in the next flow - Bulk Service: loading bulk dataAfter data loading, the User has to complete the STAGE. In this request, the Body objects have to be delivered. 
In the request, the User provides the statistics about the load or, optionally, errors.if there are errors during loading - BatchStageStatus = FAILEDif the load ended with success -    BatchStageStatus = COMPLETEDIn the end, the user should trigger the GET batch instance details operation and wait for the Batch completion (after the Loading stage all dependent stages are started)To get more details about the next internal steps check:Processing JOBSending JOBSoftDeleting JOBACK CollectorTriggersTrigger actionComponentActionDefault timeAPI requestbatch-service.RestBatchControllerRouteUser initializes the new batch instance, updates the STAGE, saves the statistics, and completes the corresponding STAGE.User is able to get batch instance details and wait for the load completionuser API request dependent, triggered by an external clientDependent componentsComponentUsageBatch ServiceThe main component that exposes the REST APIHub StoreBatch Instances Cache"
},
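The stage lifecycle described above (stages initialized as PENDING, started, then completed or failed, with the batch finishing when every stage is final) can be sketched as below. This is a minimal illustrative Python sketch under assumed names, not the Batch Controller implementation.

```python
# Minimal sketch of the batch-instance lifecycle described above:
# stages start as PENDING, must be STARTED before completion, and the
# batch is COMPLETED once every stage has reached a final status.
class BatchInstance:
    def __init__(self, stages):
        self.stages = {name: "PENDING" for name in stages}

    def start_stage(self, name):
        if self.stages[name] != "PENDING":
            raise ValueError(f"stage {name} already started")
        self.stages[name] = "STARTED"

    def complete_stage(self, name, errors=False):
        if self.stages[name] != "STARTED":
            raise ValueError(f"stage {name} is not running")
        self.stages[name] = "FAILED" if errors else "COMPLETED"

    @property
    def status(self):
        done = all(s in ("COMPLETED", "FAILED") for s in self.stages.values())
        return "COMPLETED" if done else "IN_PROGRESS"
```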
{
"title": "Batches registry",
"pageID": "234695693",
"pageLink": "/display/GMDM/Batches+registry",
"content": "There is a list of batches configured as of 01.02.2022.ONEKEYTenantCountrySource NameBatch NameStageDetailsEMEAAlgeriaONEKEYONEKEY_DZHCPLoadingHCOLoadingRelationLoadingIt will be an incremental file load and does not need the soft-delete process enabled for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)TunisiaONEKEYONEKEY_TNHCPLoadingHCOLoadingRelationLoadingIt will be an incremental file load and does not need the soft-delete process enabled for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)MoroccoONEKEYONEKEY_MAHCPLoadingHCOLoadingRelationLoadingIt will be an incremental file load and does not need the soft-delete process enabled for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)GermanyONEKEYONEKEY_DEHCPLoadingHCOLoadingRelationLoadingIt will be an incremental file load and does not need the soft-delete process enabled for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)France, AD, MCONEKEYONEKEY_FRHCPLoadingHCOLoadingRelationLoadingIt will be an incremental file load and does not need the soft-delete process enabled for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)France (DOMTOM) = RE,MQ,GP,PF,YT,GF,PM,WF,MU,NCONEKEYONEKEY_PFHCPLoadingHCOLoadingRelationLoadingIt will be an incremental file load and does not need the soft-delete process enabled for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)ItalyONEKEYONEKEY_ITHCPLoadingHCOLoadingRelationLoadingIt will be an incremental file load and does not need the soft-delete process enabled for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)SpainONEKEYONEKEY_ESHCPLoadingHCOLoadingRelationLoadingIt will be an incremental file load and does not need the soft-delete process enabled for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)Turkey ONEKEYONEKEY_TRHCPLoadingHCOLoadingRelationLoadingIt will be an incremental file load and does not need the soft-delete process enabled for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)Denmark (Plus Faroe Islands and 
Greenland)ONEKEYONEKEY_DKHCPLoadingHCOLoadingRelationLoadingIt will be an incremental file load and does not need the soft-delete process enabled for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)PortugalONEKEYONEKEY_PTHCPLoadingHCOLoadingRelationLoadingIt will be an incremental file load and does not need the soft-delete process enabled for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)RussiaONEKEYONEKEY_RUHCPLoadingHCOLoadingRelationLoadingIt will be an incremental file load and does not need the soft-delete process enabled for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)APACAustraliaONEKEYONEKEY_AUHCPLoadingHCOLoadingRelationLoadingIt will be an incremental file load and does not need the soft-delete process enabled for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)New ZealandONEKEYONEKEY_NZHCPLoadingHCOLoadingRelationLoadingIt will be an incremental file load and does not need the soft-delete process enabled for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)South KoreaONEKEYONEKEY_KRHCPLoadingHCOLoadingRelationLoadingIt will be an incremental file load and does not need the soft-delete process enabled for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)AMERCanadaONEKEYONEKEY_CAHCPLoadingHCOLoadingRelationLoadingIt will be an incremental file load and does not need the soft-delete process enabled for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)BrazilONEKEYONEKEY_BRHCPLoadingHCOLoadingRelationLoadingIt will be an incremental file load and does not need the soft-delete process enabled for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)MexicoONEKEYONEKEY_MXHCPLoadingHCOLoadingRelationLoadingIt will be an incremental file load and does not need the soft-delete process enabled for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)Argentina/UruguayONEKEYONEKEY_ARHCPLoadingHCOLoadingRelationLoadingIt will be an incremental file load and does not need the soft-delete process enabled for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)PFORCE_RXTenantCountrySource 
NameBatch NameStageDetailsAMERBrazilPFORCERX_ODSPFORCERX_ODSHCPLoadingHCOLoadingRelationLoadingIt will be an incremental file load and does not need the soft-delete process enabled for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)MexicoArgentina/UruguayCanadaAPACJapan PFORCERX_ODSPFORCERX_ODSHCPLoadingHCOLoadingRelationLoadingIt will be an incremental file load and does not need the soft-delete process enabled for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)Australia /New ZealandIndiaSouth KoreaEMEASaudi ArabiaPFORCERX_ODSPFORCERX_ODSHCPLoadingHCOLoadingRelationLoadingIt will be an incremental file load and does not need the soft-delete process enabled for entities (HCP, HCO) and relations (HCP-HCO, HCO-HCO)GermanyFranceItalySpainRussiaTurkey DenmarkPortugalGRVTenantCountrySource NameBatch NameStageEMEAGRGRVGRVHCPLoadingITFRESRUTRSADKGLFOPTAMERCAGRVGRVHCPLoadingBRMXARAPACAUGRVGRVHCPLoadingNZINJPKRGCPTenantCountrySource NameBatch NameStageEMEAGRGCPGCPHCPLoadingITFRESRUTRSADKGLFOPTAMERCAGCPGCPHCPLoadingBRMXARAPACAUGCPGCPHCPLoadingNZINJPKRENGAGETenantCountrySource NameBatch NameStageAMERCAENGAGEENGAGEHCPLoadingHCOLoadingRelationLoading"
},
{
"title": "Bulk Service: loading bulk data",
"pageID": "164469786",
"pageLink": "/display/GMDM/Bulk+Service%3A+loading+bulk+data",
"content": "DescriptionThe bulk service is responsible for loading the bundled data using REST API as the input and Kafka stage topics as the output. This process is strictly connected to the Batch Controller: creating and updating batch instance flow, which means that the Client should first initialize the new batch instance and stage. Using API requests, data is loaded to the next processing stages. Flow diagramStepsThe batch controller part is described in the Batch Controller: creating and updating batch instance flow.After the User starts the Loading stage it is possible to load the data. (Loading STAGE part on the diagram)Depending on the batch workflow configuration it is possible to load entities or relationsPOST /entities - creates entities in MDMPATCH /entities - updates entities in MDM; in that case, the partialOverride option is usedPOST /relations - creates relations in MDMPATCH /tags - adds tags to objects in MDMDELETE /tags - removes tags from objects in MDMPOST /entities/_merge - merges 2 entities in MDMPOST /entities/_unmerge - unmerges entity B from entity A in MDMAdditionally, based on the configuration, there is a limit on the number of objects in one call - by default the user is allowed to send a list of up to 25 objects in one API call.The response is the HTTP 200 code with an empty body.The API Loading stage is a synchronous operation; the rest of the process uses the Kafka Topics and all data is sent to the MDM system asynchronously. After loading all data through the specific STAGE, the Client should complete the STAGE; this will trigger the next processing steps described on the ETL Batch sub-pages. TriggersTrigger actionComponentActionDefault timeAPI requestbatch-service.RestBulkControllerRouteClients send the data to the bulk service.user API request dependent, triggered by an external clientDependent componentsComponentUsageBatch ServiceThe main component that exposes the REST APIHub StoreBatch Instances Cache"
},
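The bundling limit described above (25 objects per call by default) amounts to simple client-side chunking. A minimal illustrative Python sketch, assuming the client holds the full list in memory:

```python
def chunked(objects, limit=25):
    """Split a load into API-call-sized bundles; the default limit of
    25 objects per call mirrors the configuration described above."""
    return [objects[i:i + limit] for i in range(0, len(objects), limit)]
```

Each resulting bundle would then go into one POST/PATCH request against the bulk endpoints.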
{
"title": "Clear Cache",
"pageID": "164469784",
"pageLink": "/display/GMDM/Clear+Cache",
"content": "DescriptionThis flow is used to clear the mongo cache (removes records from batchEntityProcessStatus) for a specified batch name, object type and entity type. An optional list of countries (comma-separated) allows filtering by countries.Flow diagramStepsthe client sends the request to the batch controller with specified parameters like batchName, objectType and entityType example: {{API_URL_BATCH_CONTROLLER}}/{{batchName}}/_clearCache?objectType=RELATION&entityType=configuration/relationTypes/ContactAffiliationsexample: {{API_URL_BATCH_CONTROLLER}}/{{batchName}}/_clearCache?objectType=ENTITY&entityType=configuration/entityTypes/HCP&countries=GB,IE,FR,PT,DKthe service checks if the client is allowed to do this action - has the appropriate role CLEAR_CACHE_BATCH the service processes the client request and executes a mongo query with the specified parametersthe service returns the number of removed records.TriggersTrigger actionComponentActionDefault timeAPI Requestbatch-service.RestBatchControllerRouteExternal client calls request to clear the cacheuser API request dependent, triggered by an external clientDependent componentsComponentUsageBatch ServiceThe main component that exposes the REST APIHub StoreBatch entities/relations cache"
},
{
"title": "Clear Cache by crosswalks",
"pageID": "282663410",
"pageLink": "/display/GMDM/Clear+Cache+by+croswalks",
"content": "DescriptionThis flow is used to clear the mongo cache (removes records from batchEntityProcessStatus) for a specified batch name and sourceId type and/or valueFlow diagramStepsthe client sends the request to the batch controller with specified parameters like batchName and sourceId type and/or valueexample: PATCH {{API_URL_BATCH_CONTROLLER}}/{{batchName}}/_clearCachebody: \n{\n "sourceId": [\n {\n "type": "ABC",\n "value": "TEST:123"\n },\n {\n "type": "DEF"\n },\n {\n "value": "TEST:456"\n }\n ]\n}\nthe service checks if the client is allowed to do this action - has the appropriate role CLEAR_CACHE_BATCH the service processes the client request and executes a mongo query with the specified parametersthe service returns the number of removed records.TriggersTrigger actionComponentActionDefault timeAPI Requestbatch-service.RestBatchControllerRouteExternal client calls request to clear the cacheuser API request dependent, triggered by an external clientDependent componentsComponentUsageBatch ServiceThe main component that exposes the REST APIHub StoreBatch entities/relations cache"
},
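Translating a request body like the one above into a Mongo filter can be sketched as follows (an illustrative Python sketch using pymongo-style query dicts; the function name is an assumption, and each sourceId entry may carry a type, a value, or both):

```python
def clear_cache_filter(batch_name, source_ids):
    """Build a Mongo filter over batchEntityProcessStatus: the batch
    name is always matched, and the sourceId entries from the request
    body are combined with $or."""
    ors = []
    for sid in source_ids:
        cond = {}
        if "type" in sid:
            cond["sourceId.type"] = sid["type"]
        if "value" in sid:
            cond["sourceId.value"] = sid["value"]
        if cond:
            ors.append(cond)
    query = {"batchName": batch_name}
    if ors:
        query["$or"] = ors
    return query
```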
{
"title": "PATCH Operation",
"pageID": "355371021",
"pageLink": "/display/GMDM/PATCH+Operation",
"content": "DescriptionEntity PATCH (UpdateHCP/UpdateHCO/UpdateMCO) operation differs slightly from the standard POST (CreateHCP/CreateHCO/CreateMCO) operation:PATCH operation includes contributor crosswalk verification - MDM is searched to make sure that the updated entity exists (to prevent creation of singleton profiles)PATCH operation uses Reltio's partialOverride parameter. It allows sending only a portion of attributes (usually only the ones that have changed since the last load). Existing attribute values that have not been provided in the request will not be wiped from MDM.AlgorithmPATCH operation logic consists of the following steps:For each entity in the bundle (depending on the configuration, usually around 50 requests):Find the contributor crosswalk - if the contributor crosswalk cannot be determined, throw an exceptionSearch all the contributor crosswalks in MDM Hub Cache - a single search requestFilter results - assign each found entity to the corresponding crosswalkIf no entity is found for a crosswalk - perform a fallback search by crosswalk using the MDM APIFor every entity where the contributor crosswalk was not found in the above steps, generate a "Not Found" message.For remaining entities, perform the CreateHCP/CreateHCO/CreateMCO operation.Merge the response from CreateHCP/CreateHCO/CreateMCO with the "Not Found" messages in the correct order, and return."
},
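The algorithm above can be sketched as follows (an illustrative Python sketch; the helper callables `cache_search`, `mdm_search` and `create` are assumptions standing in for the cache lookup, the fallback MDM search and the Create operation, and error shapes are simplified):

```python
def patch_entities(bundle, cache_search, mdm_search, create):
    """Sketch of the PATCH algorithm described above: resolve each
    entity's contributor crosswalk via the cache, fall back to an MDM
    search on a miss, emit a "Not Found" result for unresolved
    crosswalks and forward the rest to the Create operation, merging
    both result sets back in the original bundle order."""
    results = [None] * len(bundle)
    to_create, positions = [], []
    for i, entity in enumerate(bundle):
        xwalk = entity.get("crosswalk")
        if xwalk is None:
            raise ValueError("contributor crosswalk cannot be determined")
        found = cache_search(xwalk) or mdm_search(xwalk)  # cache, then fallback
        if found is None:
            results[i] = {"status": "notFound", "crosswalk": xwalk}
        else:
            to_create.append(entity)
            positions.append(i)
    # Create responses come back in submission order; restore positions.
    for pos, response in zip(positions, create(to_create)):
        results[pos] = response
    return results
```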
{
"title": "Processing JOB",
"pageID": "164469780",
"pageLink": "/display/GMDM/Processing+JOB",
"content": "DescriptionThe flow checks the Cache using a poller that executes the query every <T> minutes. During this processing, the count decreases until it reaches 0. The following query is used to check the count of objects that were not delivered. The process ends if the query returns 0 objects - it means that we received an ACK for each object and it is possible to go to the next dependent stage. "{'batchName': ?0 ,'sendDateMDM':{ $gt: ?1 }, '$or':[ {'updateDateMDM':{ $lt: ?1 } }, { 'updateDateMDM':{ $exists : false } } ] }"Using a Mongo query there is a possibility to find what objects are still not processed. In that case, the user should provide batchName == "currently loading batch" and use the date that is the batch start date. Flow diagramStepsThe process starts once the activation criteria are successful, which means that the dependent JOB is COMPLETED.Using the trigger mechanism, data is polled from the Cache and counted.If the number of unprocessed entities is equal to 0, the process ends; else the process is triggered again after <T> minutes. If this is the last stage in the current batch workflow, statistics are calculated.  
(it means that there may be multiple processing jobs in one workflow, but only the last one calculates all gathered statistics)The LAST stage will always contain the following statistics: Each statistic is divided into 3 sections using the "/" separator1 - entities or relations, depending on the loaded object2 - object type, it can be HCO/HCP/MCO or any relationType loaded3 - name{entities | relations}/{object type}/receivedCount - number of objects received {entities | relations}/{object type}/skippedCount - number of objects skipped because of delta detection{entities | relations}/{object type}/failedCount - number of objects that got "failed" status from MDM{entities | relations}/{object type}/updatedCount - number of objects that got "updated" status from MDM{entities | relations}/{object type}/createdCount - number of objects that got "created" status from MDM{entities | relations}/{object type}/notFoundCount - number of objects that got "notFound" status from MDM (may occur when using the partialOverride operation){entities | relations}/{object type}/deletedCount - number of objects that got "deleted" status from MDM (may occur when the object is endDated in MDM and the update targeted an already deleted entity){entities | relations}/{object type}/softDeletedCount - number of objects removed by the SoftDeleting JOB - used only during full file loads.Example statistics:TriggersTrigger actionComponentActionDefault timeThe previous dependent JOB is completed. Triggered by the Scheduler mechanismbatch-service:ProcessingJobQueries Mongo and checks the number of objects that are not yet processed.every 60 secondsDependent componentsComponentUsageBatch ServiceThe main component with the Processing JOB implementationHub StoreThe cache that stores all information about the loaded objects"
},
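The Mongo query above can be expressed as a plain-Python predicate for clarity (an illustrative sketch; `?0` is the batch name, assumed already matched, and `?1` is the batch start timestamp):

```python
def is_pending(doc, batch_start):
    """Python equivalent of the Mongo poller query above: an object is
    still pending when it was sent to MDM after the batch started and
    either has no ACK yet (updateDateMDM missing) or its last ACK
    predates the batch start."""
    if doc.get("sendDateMDM", 0) <= batch_start:  # sendDateMDM $gt ?1
        return False
    upd = doc.get("updateDateMDM")                # $exists: false, or $lt ?1
    return upd is None or upd < batch_start
```

The Processing JOB keeps polling until no cached document satisfies this predicate.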
{
"title": "Sending JOB",
"pageID": "164469778",
"pageLink": "/display/GMDM/Sending+JOB",
"content": "DescriptionThe JOB is responsible for sending the data from the Stage Kafka topics to the manager component. During this process the data is checked, and the checksum is calculated and compared to the previous state, so only the changes are applied to MDM. The Cache - Batch data store - contains multiple metadata attributes like sourceIngestionDate - the time when this entity was most recently shared by the Client - and the ACK response status (create/update/failed) The checksum calculation is skipped for the "failed" objects. It means there is no need to clear the cache for the failed objects; the user just needs to reload the data. The JOB is triggered once the previous dependent job is completed or is started. There are two modes of dependency between the Loading STAGE and the Sending STAGE(hard) dependentStages - the Sending stage will start once the previous dependent JOB is COMPLETEDsoftDependentStages - the Sending stage will start in parallel to the Loading stage. It means that all loaded data will be immediately sent to Reltio. The purpose of hard dependency is the case when the user has to Load HCP/HCO and Relations objects. The sending of relations has to start after the HCP and HCO load is COMPLETED. The process finishes once the Batch stage queue is empty for 1 minute (no new events are in the queue).The following query is used to retrieve the processing object from cache. 
Where the batchName is the corresponding Batch Instance, and sourceId is the information about the loaded source crosswalk.{'batchName': ?0, {'sourceId.type': ?1, 'sourceId.value': ?2,'sourceId.sourceTable': ?3 } }Flow diagramStepsThe process starts once the activation criteria are successful, which means that the (hard) dependent JOB is COMPLETED or the soft dependent JOB is STARTED.All entities or relations are polled from the stage topicif objects exist on the topic, for each:the current state is retrieved from the Batch Cache if this is a new one the object is initialized with all required attributes and a checksumthe checksum is calculated (for failed status checksum calculation is skipped)the sourceIngestionDate is updated to the current date (required to track the object and generate soft-deletes once the entity was not received)updateDate, sendDateMDM attributes are updated and the "deleted" flag is set to falseonce no new objects are on the stage topic the process is finished. The STAGE is updated with COMPLETED status.TriggersTrigger actionComponentActionDefault timeThe previous dependent JOB is completed. Triggered by the Scheduler mechanismbatch-service:SendingJobGets entries from the stage topic, saves data in Mongo and creates/updates profiles using a Kafka producer (asynchronous channel)once the dependent JOB is completedDependent componentsComponentUsageBatch ServiceThe main component with the Sending JOB implementationHub StoreThe cache that stores all information about the loaded objects"
},
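The checksum-based delta detection described above can be sketched as follows (an illustrative Python sketch; the real service runs on the JVM and its hashing scheme is not specified here, so MD5 over canonical JSON and the function name are assumptions):

```python
import hashlib
import json

def needs_send(entity, cached):
    """Decide whether an incoming entity must be sent to MDM, per the
    delta-detection rules above: new objects are always sent, objects
    whose last ACK status was "failed" are always resent (their cached
    checksum is ignored), and otherwise a changed checksum triggers a
    send. Returns (should_send, new_checksum)."""
    checksum = hashlib.md5(
        json.dumps(entity, sort_keys=True).encode()).hexdigest()
    if cached is None:                      # first time seen in this batch
        return True, checksum
    if cached.get("status") == "failed":    # failed objects: just reload
        return True, checksum
    return cached.get("checksum") != checksum, checksum
```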
{
"title": "SoftDeleting JOB",
"pageID": "164469776",
"pageLink": "/display/GMDM/SoftDeleting+JOB",
"content": "DescriptionThis JOB is responsible for the soft-delete process for full file loads. Batches that are configured with this JOB have to always deliver the full set of data. The process is triggered at the end of the workflow and soft-deletes objects in the MDM system. The following query is used to check how many objects are going to be removed and also to get all these objects and send the soft-delete requests. {'batchName': ?0, 'deleted': false, 'objectType': 'ENTITY OR RELATION', 'sourceIngestionDate':{ $lt: ?1 } }Once the object is soft deleted, the "deleted" flag is changed to "true"Using a mongo query there is a possibility to check what objects were soft-deleted by this process. In that case, the Administrator should provide batchName = "currently loading batch" and the deleted parameter = "true".The process removes all objects that were not delivered in the current load, which means that the "SourceIngestionDate" is lower than the "BatchStartDate".It may occur that the number of objects to soft-delete exceeds the limit; in that case, the process is aborted and the Administrator should verify what objects are blocked and notify the client. The production limit is a maximum of 10000 objects in one load.Flow diagramSteps The process starts once the activation criteria are successful, which means that the dependent JOB is COMPLETED.Using a query, in the first step the process counts the number of entities to be soft-deletedIf the limit is exceeded, the process is aborted and the status with the reason is saved in Cache. The limit is a safety switch in case we get a corrupted file (empty or partial). 
It prevents deleting all MDM profiles in such cases.in the "RelationsUnseenDeletion" STAGE the following information is saved:statistics:maxDeletesLimit - currently configured limitentitiesUnseenResultCount - number of entities that the process indicated for soft-deleteerrors:errorCode - 400 errorMessage - Entities delete limit exceeded, aborting soft delete sending.example:Otherwise, the Cache is queried and the returned objects are sent to the Manager for removalIn the loop, all objects are queried from the Cache and the data is sent to the corresponding Kafka topic. During this operation, the cache is updated and the MDMRequest is preparedMDMRequest:entityTypecountryCrosswalktypevaluedeleteDate - current timestampCache attributes to update:updateDate = current time - cache object update timedeleteDateMDM = current time - date that contains the delete date of the corresponding objectsendDateMDM = current time - date that contains the time when the profile was sent to MDMdeleted = true - flag indicates that the profile was soft-deleted2023-07 Update: Set Soft-Delete Limit by CountryDeletingJob now allows additional configuration:\ndeletingJob:\n  "TestDeletesPerCountryBatch":\n    "EntitiesUnseenDeletion":\n      maxDeletesLimit: 20\n      queryBatchSize: 5\n      reltioRequestTopic: "local-internal-async-all-testbatch"\n      reltioResponseTopic: "local-internal-async-all-testbatch-ack"\n      maxDeletesLimitPerCountry:\n        enabled: true\n        overrides:\n          CA: 10\n          BR: 30\nIf maxDeletesLimitPerCountry.enabled == true (default false):the soft-deletes limit in maxDeletesLimit is applied per country. The number of records to delete is fetched from the Cache for each country, and if any of the countries exceeds the limit, the batch is failed with an appropriate error message.the soft-deletes limit can be changed for each country using the maxDeletesLimitPerCountry.overrides map. 
If a country is not present in the overrides, the default value from maxDeletesLimit is usedTriggersTrigger actionComponentActionDefault timeThe previous dependent JOB is completed. Triggered by the Scheduler mechanismbatch-service:AbstractDeletingJob (DeletingJob/DeletingRelationJob)Queries Mongo and soft-deletes profiles using a Kafka producer (asynchronous channel)once the dependent JOB is completedDependent componentsComponentUsageBatch ServiceThe main component with the SoftDeleting JOB implementationManagerAsynchronous channel Hub StoreThe cache that stores all information about the loaded objects"
},
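The per-country limit check from the 2023-07 update can be sketched as follows (an illustrative Python sketch mirroring the maxDeletesLimit / maxDeletesLimitPerCountry.overrides configuration above; the function name is an assumption):

```python
def check_delete_limits(counts_by_country, max_limit, overrides=None):
    """Check pending soft-delete counts per country against the limit:
    each country uses its value from the overrides map when present,
    falling back to the default maxDeletesLimit otherwise. Returns the
    countries that breach their limit; a non-empty result means the
    batch should be failed with an appropriate error message."""
    overrides = overrides or {}
    return {country: count
            for country, count in counts_by_country.items()
            if count > overrides.get(country, max_limit)}
```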
{
"title": "Event filtering and routing rules",
"pageID": "164470034",
"pageLink": "/display/GMDM/Event+filtering+and+routing+rules",
"content": "At various stages of processing, events can be filtered based on some configurable criteria. This helps to lessen the load on the Hub and client systems, as well as simplifies processing on the client side by avoiding the types of events that are of no interest to the target application. There are three places where event filtering is applied:Reltio Subscriber filters events based on their (Reltio-defined) typeNucleus Subscriber filters out duplicate events, based on event type and entityUriEvent Publisher filters events based on their contentEvent type filteringEach event received from the SQS queue has a "type" attribute. Reltio Subscriber has an "allowedEventTypes" configuration parameter (in the application.yml config file) that lists the event types which are processed by the application. Currently, the complete list of supported types is:ENTITY_CREATEDENTITY_REMOVEDENTITY_CHANGEDENTITY_LOST_MERGEENTITIES_MERGEDENTITIES_SPLITTEDAn event that does not match this list is ignored, and a "Message skipped" entry is added to a log file.Please keep in mind that while it is easy to remove an event type from this list in order to ignore it, adding a new event type is a whole different story: it might not be possible without changes to the application source code.Duplicate detection (Nucleus)There's an in-memory cache maintained that stores the entityUri and type of an event previously sent for that uri. This allows duplicate detection. The cache is cleared after successful processing of the whole zip file.Entity data-based filteringEvent Publisher component receives events from an internal Kafka topic. After fetching the current Entity state from Reltio (via MDM Integration Gateway) it imposes a few additional filtering rules based on the fetched data. Those rules are:Filtering based on the Country that the entity belongs to. This is based on the value of the ISO country code, extracted from the Country attribute of an entity. 
The list of allowed codes is maintained as the "activeCountries" parameter in the application.yml config file.Filtering based on Entity type. This is controlled by the "allowedEntityTypes" configuration parameter, which currently lists two values: "HCP" and "HCO". Those values are matched against the "entityType" attribute of the Entity (the prefix "configuration/entityTypes/" is added automatically, so it does not need to be included in the configuration file)Filtering out events that have an empty "targetEntity" attribute - such events are considered outdated, plus they lack some mandatory information that would normally be extracted from targetEntity, such as the originating country and source system. They are filtered out because the Hub would not be able to process them correctly anyway.Filtering out events that have a value mismatch between the "entitiesURIs" attribute of an event and the "uri" attribute of targetEntity for all event types except HCP_LOST_MERGE and HCO_LOST_MERGE. A uri mismatch may arise when Event Publisher is processing events with significant delay (e.g. due to downtime, or when reprocessing events): Event Publisher might be processing an HCP_CHANGED (HCO_CHANGED) event for an Entity that was merged with another Entity since then, so the HCP_CHANGED event is considered outdated, and we are expecting an HCP_LOST_MERGE event for the same Entity.This filter is controlled by the eventRouter.filterMismatchedURIs configuration parameter, which takes Boolean values (yes/no, true/false)Filtering out events based on timestamps. When an HCP_CHANGED or HCO_CHANGED event arrives that has an "eventTime" timestamp older than the "updatedTime" of the targetEntity, it is assumed that another change for the same entity has already happened and that another event is waiting in the queue to be processed. 
By ignoring the current event, Event Publisher ensures that only the most recent change is forwarded to client systems.This filter is controlled by the eventRouter.filterOutdatedChanges configuration parameter, which can take Boolean values (yes/no, true/false)Event routingPublishing Hub supports multiple client systems subscribing to Entity change events. Since those clients might be interested in different subsets of events, the event routing mechanism was created to allow configurable, content-based routing of the events to specific client systems. The routing mechanics consist of three main parts:Kafka topics - each client system can have one or more dedicated topics where events of interest for that system are publishedMetadata extraction - as one of the processing steps, some pieces of information are extracted from the Event and the related Entity and put in the processing context (as headers), so they can be easily accessed.Configurable routing rules - Event Publisher's configuration file contains a whole section for defining rules that use the Groovy scripting language and the metadata.Available metadata is described in the table below.Table 10. Routing headersHeaderTypeValuesSource FieldDescriptioneventTypeStringfull simplenoneType of an event. "full" means Event Sourcing mode, with full targetEntity data. "simple" is just an event with basic data, without targetEntityeventSubtypeStringHCP_CREATED, HCP_CHANGED, ….event.eventTypeThe full list of available event subtypes is specified in the MDM Publishing Hub Streaming Interface document.countryStringCN FRevent.targetEntity.attributes .Country.lookupCodeCountry of origin for the EntityeventSourceArray of String["OK", "GRV"]event. targetEntity.crosswalks.typeArray containing names of all the source systems as defined by Reltio crosswalksmdmSourceString["RELTIO", "NUCLEUS"]NoneSystem of origin for the Entity.selfMergeBooleantrue, falseNoneIs the event a "self-merge"? 
Enables filtering out merges on the fly.Routing rules configuration is found in the eventRouter.routingRules section of the application.yml configuration file. Here's an example of such a rule: Elements of this configuration are described below.id - unique identifier of the ruleselector - snippet of Groovy code, which should return true or false depending on whether or not the message should be forwarded to the destination.destination - name of the topic that the message should be sent to.Selector syntax can include, among others, the elements listed in the table below.Table 11. Selector syntaxElementExampleDescriptioncomparison operators==, !=, <, >Standard Groovy syntaxboolean operators&&, ||set operatorsin, intersectMessage headersexchange.in.headers.countrySee Table 10 for the list of available headers. "exchange.in.headers" is the standard prefix that must be used to access themFull syntax reference can be found in the Apache Camel documentation: http://camel.apache.org/groovy.html . The limitation here is that the whole snippet should return a single boolean value.Destination name can be literal, but can also reference any of the message headers from Table 10, with the following syntax: "
},
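The rule structure described above (id, selector, destination) and header-based matching can be sketched compactly. This is a minimal Python sketch of the routing logic only, not the actual Java/Camel implementation: rule ids, topic names, and header values are hypothetical, and plain Python lambdas stand in for the Groovy selector snippets.

```python
# Minimal sketch of content-based event routing using headers like those in Table 10.
# Rule ids, topics, and selector conditions below are illustrative, not real config.
ROUTING_RULES = [
    {"id": "us-hcp-full",
     "selector": lambda h: h["country"] == "US" and h["eventType"] == "full",
     "destination": "us-hcp-full-events"},
    {"id": "emea-all",
     "selector": lambda h: h["country"] in {"FR", "DE", "GB"},
     "destination": "emea-events"},
]

def route(headers):
    """Return the destination topics of every rule whose selector matches the headers."""
    return [rule["destination"] for rule in ROUTING_RULES if rule["selector"](headers)]
```

An event with headers {"country": "FR", "eventType": "simple"} would match only the "emea-all" rule and be forwarded to "emea-events"; an event from an unconfigured country matches no rule and is not forwarded anywhere.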
{
"title": "FLEX COV Flows",
"pageID": "172301002",
"pageLink": "/display/GMDM/FLEX+COV+Flows",
"content": ""
},
{
"title": "Address rank callback",
"pageID": "164470175",
"pageLink": "/display/GMDM/Address+rank+callback",
"content": "The Address Rank Callback is used only in the FLEX COV environment to update the Rank attribute on Addresses. This process sends the callback to Reltio only when the specific source exists on the profile. The Rank is then used by the Business Team or Data Stewards in Reltio, or by the downstream FLEX system. The Address Rank Callback is always triggered when the getEntity operation is invoked. The purpose of this process is to synchronize Reltio with the correct address rank sort order.Currently, the functionality is configured only for the US Trade Instance. Below is the diagram outlining the whole process. Process steps description:Event Publisher receives events from the internal Kafka topic and calls the MDM Gateway API to retrieve the latest state of the Entity from Reltio.The Event Publisher internal user is authorized in MDM Manager, which checks the source, country and appropriate access roles. MDM Manager invokes the get entity operation in Reltio. The returned JSON is then passed to the Address Rank sort process, so the client will always get the entity with addresses in sorted rank order, but only when this feature is activated in configuration.When the Address Rank Sort process is activated, each address in the entity is sorted. In this case the "AddressRank" and "BestRecord" attributes are set. When AddressRank is equal to "1", the BestRecord attribute will always have the value "1".When the Address Rank Callback process is activated, the relation operation is invoked in Reltio. The Relation Request object contains a Relation object for each sorted address. Each Relation will be created with the "AddrCalc" source, where the start object is the current entity id and the end object is the id of the Location entity. In that case a relation between the entity and the Location is created with additional rank attributes. 
There is no need to send multiple callback requests every time the getEntity operation is invoked, so the Callback operation is invoked only when the address rank sort order has changed.Entity data is stored in the MongoDB NoSQL database, for later use in Simple mode (publication of events that carry only the entityURI and require the client to retrieve the full Entity via the REST API).For every Reltio event there are two Publishing Hub events created: one in Simple mode and one in Event Sourcing (full) mode. Based on metadata, and the Routing Rules provided as a part of the application configuration, the list of the target destinations for those events is created. The event is sent to all matched destinations."
},
{
"title": "DEA Flow",
"pageID": "164470009",
"pageLink": "/display/GMDM/DEA+Flow",
"content": "This flow processes DEA files published by the GIS Team to the S3 Bucket. Flow steps are presented in the sequence diagram below.  Process steps description:DEA files are uploaded to the AWS S3 storage bucket, to the appropriate directory intended only for DEA files.The Batch Channel component monitors the S3 location and processes the files uploaded to it.The folder structure for DEA is divided into "inbound" and "archive" directories. The Batch Channel component polls data from the inbound directory; after successful processing the file is copied to the "archive" directory.Files downloaded from S3 are processed in streaming mode: processing can start before the file is fully downloaded, which speeds up processing of big files.The DEA file load Start Time is saved for the specific load as loadStartDate.Each line in the file is parsed in the Batch Channel component and mapped to the dedicated DEA object. The DEA file is saved in Fixed Width Data Format; one DEA record is saved per line, so there is no need to use a record aggregator. Each line has a specified length, and each column has specified start and end positions in the row.The BatchContext is downloaded from MongoDB for each DEA record. This context contains the DEA crosswalk ID, the line from the file, an MD5 checksum, the last modification date and a delete flag. When the BatchContext is empty, it means this DEA record is created for the first time - such an object is sent to the Kafka topic. When the BatchContext is not empty, the MD5 from the source DEA file is compared to the MD5 from the BatchContext (Mongo). If the MD5 checksums are equal, the object is skipped; otherwise it is sent to the Kafka topic. 
For each modified object, lastModificationDate is updated in Mongo - it is required to detect deleted records as the final step.Only when the record MD5 checksum has changed is the DEA record published to the Kafka topic dedicated to DEA record events. The records will be processed by the MDM Manager component. The first step is an authorization check to verify that this event was produced by the Batch Channel component with the appropriate source name, country and roles. Then the standard process for HCO creation is started. The full description of this process is in the HCO Post section.TransactionLog Service is an additional component for managing transaction logs. The role of this component is to save each successful or unsuccessful flow in the transaction log. Additionally, each log is saved in MongoDB to create a full report from the current load and to correlate record flow between the Batch Channel and MDM Manager components.After the DEA file is successfully processed, the DEA delete record processor is started. Each record with lastModificationDate less than loadStartDate and delete flag equal to false is downloaded from the Mongo database. When the result count is greater than 1000, the delete record processor is stopped - this is a protective feature in case of a wrong file upload, which could otherwise cause multiple unexpected DEA profile deletions. Otherwise, when the result count is less than 1000, each record from MongoDB is parsed and sent to the Kafka topic with a deleteDate attribute on the crosswalk. They will then be processed by the MDM Manager component. The first step is an authorization check to verify that this event was produced by the Batch Channel component with the appropriate source name, country and roles. Then the standard process for HCO creation is started. The full description of this process is in the HCO Post section. Profiles created with a deleteDate attribute on the crosswalk are soft deleted in Reltio.Finally, the DEA file is moved to the archive subtree in the S3 bucket."
},
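The MD5-based change detection and the final delete-detection step described above can be sketched as follows. This is a minimal Python sketch under stated assumptions, not the Java implementation: the crosswalk-id position in the fixed-width row is hypothetical, and a plain dict stands in for the MongoDB BatchContext.

```python
import hashlib

SAFETY_LIMIT = 1000  # stop delete processing above this count (protective feature)

def detect_changes(lines, batch_context, load_start):
    """Compare each DEA line's MD5 against the cached BatchContext (Mongo stand-in).

    New or changed records are returned for publishing to Kafka; every record
    seen in the file gets its lastModificationDate refreshed so that the final
    delete-detection step does not treat it as removed.
    """
    to_publish = []
    for line in lines:
        key = line[:9].strip()  # hypothetical crosswalk-id position in the row
        md5 = hashlib.md5(line.encode("utf-8")).hexdigest()
        ctx = batch_context.get(key)
        if ctx is None or ctx["md5"] != md5:
            to_publish.append(key)
        batch_context[key] = {"md5": md5, "lastModificationDate": load_start, "deleted": False}
    return to_publish

def detect_deletes(batch_context, load_start):
    """Records not seen in the current load (older lastModificationDate, not yet
    deleted) are delete candidates, capped by the safety threshold."""
    candidates = [k for k, v in batch_context.items()
                  if v["lastModificationDate"] < load_start and not v["deleted"]]
    if len(candidates) > SAFETY_LIMIT:
        return []  # likely a wrong file upload - do not mass-delete
    return candidates
```

A record that disappears from the input file keeps its old lastModificationDate, so the next load's delete pass picks it up; the 1000-record cap mirrors the protective stop described above.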
{
"title": "FLEX Flow",
"pageID": "164470035",
"pageLink": "/display/GMDM/FLEX+Flow",
"content": "This flow processes FLEX files published by the Flex Team to the S3 Bucket. Flow steps are presented in the sequence diagram below. Process steps description:FLEX files are uploaded to the AWS S3 storage bucket, to the appropriate directory intended only for FLEX files.The Batch Channel component monitors the S3 location and processes the files uploaded to it.The folder structure for FLEX is divided into "inbound" and "archive" directories. The Batch Channel component polls data from the inbound directory; after successful processing the file is copied to the "archive" directory.Files downloaded from S3 are processed in streaming mode: processing can start before the file is fully downloaded, which speeds up processing of big files.Each line in the file is parsed in the Batch Channel component and mapped to the dedicated FLEX object. The FLEX file is saved in CSV Data Format; one FLEX record is saved per line, so there is no need to use a record aggregator. The first line in the file is always the header line with column names; each subsequent line is a FLEX record with "," (comma) as the delimiter. The most complex part of FLEX mapping is Identifiers mapping. When a Flex record contains the "GROUP_KEY" ("Address Key") attribute, it means that identifiers saved in "Other Active IDs" will be added to the FlexID.Identifiers nested attributes. "Other Active IDs" is a one-line string with key-value pairs separated by "," (comma), with ":" (colon) as the key-value delimiter. Additionally, for each type of customer, the Flex identifier is always saved in the FlexID section.Each FLEX record is published to the Kafka topic dedicated to FLEX record events. They will be processed by the MDM Manager component. The first step is an authorization check to verify that this event was produced by the Batch Channel component with the appropriate source name, country and roles. 
Then the standard process for HCO creation is started. The full description of this process is in the HCO Post section.TransactionLog Service is an additional component for managing transaction logs. The role of this component is to save each successful or unsuccessful flow in the transaction log. Additionally, each log is saved in MongoDB to create a full report from the current load and to correlate record flow between the Batch Channel and MDM Manager components.After the FLEX file is successfully processed, it is moved to the archive subtree in the S3 bucket."
},
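The "Other Active IDs" format described above (a one-line string of comma-separated key:value pairs) can be parsed with a few lines of code. This is a minimal Python sketch of the parsing rule only; the sample keys and values are hypothetical.

```python
def parse_other_active_ids(raw):
    """Parse the 'Other Active IDs' single-line string: key-value pairs separated
    by ',' (comma), with ':' (colon) as the key-value delimiter."""
    ids = {}
    for pair in raw.split(","):
        if ":" in pair:
            key, value = pair.split(":", 1)  # split on the first colon only
            ids[key.strip()] = value.strip()
    return ids
```

For example, "HIN:ABC123, DEA:XY9876543" yields two identifier entries that would then be added to the FlexID.Identifiers nested attributes.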
{
"title": "HIN Flow",
"pageID": "164469995",
"pageLink": "/display/GMDM/HIN+Flow",
"content": "This flow processes HIN files published by the HIN Team to the S3 Bucket. Flow steps are presented in the sequence diagram below. Process steps description:HIN files are uploaded to the AWS S3 storage bucket, to the appropriate directory intended only for HIN files.The Batch Channel component monitors the S3 location and processes the files uploaded to it.The folder structure for HIN is divided into "inbound" and "archive" directories. The Batch Channel component polls data from the inbound directory; after successful processing the file is copied to the "archive" directory.Files downloaded from S3 are processed in streaming mode: processing can start before the file is fully downloaded, which speeds up processing of big files.Each line in the file is parsed in the Batch Channel component and mapped to the dedicated HIN object. The HIN file is saved in Fixed Width Data Format; one HIN record is saved per line, so there is no need to use a record aggregator. Each line has a specified length, and each column has specified start and end positions in the row.Each HIN record is published to the Kafka topic dedicated to HIN record events. They will be processed by the MDM Manager component. The first step is an authorization check to verify that this event was produced by the Batch Channel component with the appropriate source name, country and roles. Then the standard process for HCO creation is started. The full description of this process is in the HCO Post section.TransactionLog Service is an additional component for managing transaction logs. The role of this component is to save each successful or unsuccessful flow in the transaction log. 
Additionally, each log is saved in MongoDB to create a full report from the current load and to correlate record flow between the Batch Channel and MDM Manager components.After the HIN file is successfully processed, it is moved to the archive subtree in the S3 bucket."
},
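Fixed-width parsing, as used for the HIN (and DEA) files above, slices each line by the configured start/end positions. This is a minimal Python sketch; the column layout below is illustrative, not the real HIN file specification.

```python
# Hypothetical column layout: the real HIN spec defines its own field positions.
HIN_LAYOUT = {
    "hin":  (0, 9),    # [start, end) positions within the fixed-width row
    "name": (9, 39),
    "city": (39, 59),
}

def parse_fixed_width(line, layout=HIN_LAYOUT):
    """Slice one fixed-width line into named fields; no record aggregator is
    needed because one record occupies exactly one line."""
    return {field: line[start:end].strip() for field, (start, end) in layout.items()}
```

Because every line has a fixed length and every column fixed positions, the parser needs no delimiters and each line maps independently to one record object.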
{
"title": "SAP Flow",
"pageID": "164469997",
"pageLink": "/display/GMDM/SAP+Flow",
"content": "This flow processes SAP files published by the GIS system to the S3 Bucket. Flow steps are presented in the sequence diagram below. Process steps description:SAP files are uploaded to the AWS S3 storage bucket, to the appropriate directory intended only for SAP files.The Batch Channel component monitors the S3 location and processes the files uploaded to it.Important note: To facilitate fault tolerance, the Batch Channel component will be deployed on multiple instances on different machines. However, to avoid conflicts, such as processing the same file twice, only one instance is allowed to do the processing at any given time. This is implemented via the standard Apache Camel Route Policy mechanism, which is backed by the Zookeeper distributed key-value store. When a new file is picked up by a Batch Channel instance, the first processing step is to create a key in Zookeeper, acting as a lock. Only one instance will succeed in creating the key, therefore only one instance will be allowed to proceed.The folder structure for SAP is divided into "inbound" and "archive" directories. The Batch Channel component polls data from the inbound directory; after successful processing the file is copied to the "archive" directory.Files downloaded from S3 are processed in streaming mode: processing can start before the file is fully downloaded, which speeds up processing of big files.Each line in the file is parsed in the Batch Channel component and mapped to the dedicated SAP object. In the case of SAP files, one SAP record is saved in multiple lines of the file, so the SAPRecordAggregator is needed. This class reads each line of the SAP file and aggregates the lines to create a full SAP record. Each line starts with a Record Type character; the separator for SAP is "~" (tilde). 
Only lines that start with one of the following characters are parsed and contribute to the full SAP record:1 - Header4 - Sales OrganizationE - LicenseC - NotesWhen the header line is parsed, the Account Type attribute is checked. Only SAP records with the "Z031" type are kept and posted to Reltio.The BatchContext is downloaded from MongoDB for each SAP record. This context contains the Start Date for the SAP and 340B Identifiers. When the BatchContext is empty, the current timestamp is saved for each of the Identifiers; otherwise the start date for the identifiers is replaced with the one saved in the Mongo cache. The Start Date must always be overwritten with the initial dates from the Mongo cache.The aggregated SAP record is published to the Kafka topic dedicated to SAP record events. They will be processed by the MDM Manager component. The first step is an authorization check to verify that this event was produced by the Batch Channel component with the appropriate source name, country and roles. Then the standard process for HCO creation is started. The full description of this process is in the HCO POST section.TransactionLog Service is an additional component for managing transaction logs. The role of this component is to save each successful or unsuccessful flow in the transaction log. Additionally, each log is saved in MongoDB to create a full report from the current load and to correlate record flow between the Batch Channel and MDM Manager components.After the SAP file is successfully processed, it is moved to the archive subtree in the S3 bucket."
},
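The multi-line aggregation and Z031 filtering described above can be sketched as follows. This is a minimal Python sketch of the aggregation idea, not the actual SAPRecordAggregator: the position of the Account Type field within the header line is a hypothetical assumption.

```python
PARSED_TYPES = {"1", "4", "E", "C"}  # Header, Sales Organization, License, Notes

def aggregate_sap_records(lines):
    """Aggregate multi-line SAP records delimited by '~'; a '1' (header) line
    starts a new record. Only records whose header carries Account Type 'Z031'
    are kept. The field index of Account Type below is hypothetical."""
    records, current = [], None

    def flush():
        # Keep the finished record only if its header has the Z031 account type.
        if current is not None and current["account_type"] == "Z031":
            records.append(current)

    for line in lines:
        record_type = line[:1]
        if record_type not in PARSED_TYPES:
            continue  # other record types are ignored
        fields = line.split("~")
        if record_type == "1":
            flush()
            current = {"account_type": fields[2], "lines": [fields]}
        elif current is not None:
            current["lines"].append(fields)
    flush()
    return records
```

Given header lines for accounts of types Z031 and Z099, only the Z031 records (with their following 4/E/C lines attached) survive the aggregation.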
{
"title": "US overview",
"pageID": "164470019",
"pageLink": "/display/GMDM/US+overview",
"content": ""
},
{
"title": "Generic Batch",
"pageID": "164469994",
"pageLink": "/display/GMDM/Generic+Batch",
"content": "The generic batch offers the functionality of configuring processes of HCP/HCO data loading from text files (CSV) into MDM.The loading processes are defined in the configuration, without the need for changes in the implementation.Description of the processDefinition of a single data flow Configuration (definition) of each data flow contains:Data flow name Definition of data files. Each file is described by: File name patternMappings for each column Columns in the file definition are described by: Column index and name Column type (string, date, number, fixed value)Attribute of the entity to which the value from the column is mappedConditional mapping parametersAmazon S3 resources and local temporary directory configurationAmazon S3 input directory Amazon S3 archive directory Local temporary directory Kafka topic names for sending asynchronous requests Mongo database connection parameters (common for all flow definitions) Currently defined data flows:Flow nameCountrySource systemInput files (with names required after preprocessing stage)Detailed columns to entity attribute mapping fileTH HCPTHCICRhcpEntitiesfileNamePattern: '(TH_Contact_In)+(\\.(?i)(txt))$'hcpAddressesfileNamePattern: '(TH_Contact_Address_In_JOINED)+(\\.(?i)(txt))$'hcpSpecialtiesfileNamePattern: '(TH_Contact_Speciality_In)+(\\.(?i)(txt))$'mdm-gateway\\batch-channel\\src\\main\\resources\\flows.ymlSA HCPSALocalMDMhcpEntitiesfileNamePattern: '(KSA_HCPs)+(\\.(?i)(csv))$'mdm-gateway\\batch-channel\\src\\main\\resources\\flows.yml"
},
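The configuration-driven mapping described above (column index, type, and target entity attribute per column) can be sketched in a few lines. This is a minimal Python sketch under stated assumptions: the flow definition, column names, and attribute names below are hypothetical, not taken from the real flows.yml.

```python
import csv
import io

# Hypothetical flow definition mirroring the structure described above:
# each column has an index, a type, and a target entity attribute.
FLOW = {
    "name": "SA HCP",
    "columns": [
        {"index": 0, "type": "string", "attribute": "FirstName"},
        {"index": 1, "type": "string", "attribute": "LastName"},
        {"index": 2, "type": "number", "attribute": "LicenseNumber"},
    ],
}

def map_row(row, flow):
    """Map one CSV row to entity attributes according to the flow definition,
    so a new loading process needs only configuration, not code changes."""
    entity = {}
    for col in flow["columns"]:
        value = row[col["index"]]
        if col["type"] == "number":
            value = int(value)  # column type drives the conversion
        entity[col["attribute"]] = value
    return entity

rows = list(csv.reader(io.StringIO("John,Smith,12345")))
```

Adding a new country/source flow then amounts to adding a new definition like FLOW to the configuration, with its own file name pattern and column mappings.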
{
"title": "Get Entity",
"pageID": "164470021",
"pageLink": "/display/GMDM/Get+Entity",
"content": "DescriptionThe getEntity operation of MDM Manager fetches the current state of the OV from the MongoDB store.The detailed process flow is shown below.Flow diagramGet EntityStepsThe client sends an HTTP request to the MDM Manager endpoint.Kong Gateway receives the request and handles authentication.If the authentication succeeds, the request is forwarded to the MDM Manager component.MDM Manager checks the user's permissions to call the getEntity operation and the correctness of the request.If the user's permissions are correct, MDM Manager proceeds with searching for the specified entity by id.MDM Manager checks the user profile configuration for the getEntity operation to determine whether to return results based on MongoDB state or call Reltio directly.For clients configured to use MongoDB: if the entity is found, its status is checked. For entities with LOST_MERGE status, the parentEntityId attribute is used to fetch and return the parent Entity instead. This is in line with default Reltio behavior, since MDM Manager is supposed to mirror Reltio.TriggersTrigger actionComponentActionDefault timeREST callManager: GET /entity/{entityId}get specific objects from MDM systemAPI synchronous requests - realtimeDependent componentsComponentUsageManagerget Entities in MDM systems"
},
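The LOST_MERGE resolution step above can be sketched as a small lookup. This is a minimal Python sketch, not the Java implementation: a plain dict stands in for the MongoDB store, and following a chain of merges (rather than a single hop) is an assumption of this sketch.

```python
def get_entity(store, entity_id):
    """Resolve an entity from the MongoDB-backed store. For entities in
    LOST_MERGE status, follow parentEntityId and return the surviving parent
    instead, mirroring default Reltio merge behaviour."""
    entity = store.get(entity_id)
    while entity is not None and entity.get("status") == "LOST_MERGE":
        entity = store.get(entity.get("parentEntityId"))
    return entity
```

A client asking for a merged-away id therefore transparently receives the surviving profile, the same answer Reltio itself would give.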
{
"title": "GRV & GCP events processing",
"pageID": "164470032",
"pageLink": "/pages/viewpage.action?pageId=164470032",
"content": "ContactsVendorContactMAP/DEG API supportMatej.Dolanc@COMPANY.comThis flow processes events from GRV and GCP systems distributed through Event Hub. Processing is split into three stages. Since each stage is implemented as a separate Apache Camel route and separated from the other stages by a persistent message store (Kafka), it is possible to turn each stage on/off separately using the Admin Console.SQS subscriptionThe first processing stage receives data published by Event Hub from Amazon SQS queues, as shown in the diagram below.Figure 5. First processing stageProcess steps description:Data changes in GRV and GCP are captured by Event Hub and distributed to MAP Channel components using SQS queues with the names:eh-out-reltio-gcp-update-<env_code>eh-out-reltio-gcp-batch-update-<env_code>eh-out-reltio-grv-update-<env_code>Events pulled from the SQS queues are published to a Kafka topic as a way of persisting them (allowing reprocessing) and to allow event prioritization and throughput control towards Reltio. The following topics are used:<env_code>-gw-internal-gcp-events-raw<env_code>-gw-internal-grv-events-rawTo ensure correct ordering of messages in Kafka, a custom message key is generated. It is a concatenation of the market code and the unique Contact/User id.Once the message is published to Kafka, it is confirmed in SQS and deleted from the queue.Enrichment with DEG dataFigure 6. Second processing stageThe second processing stage is focused on getting data from the DEG system. The control flow is presented below.Process steps description:MAPChannel receives events from the Kafka topic on which they were published in the previous stage.MAPChannel filters events based on country activation criteria - events coming from non-activated countries are skipped. 
The list of active countries is controlled by a configuration parameter, separately for each source (GRV, GCP);Next, MapChannel calls DEG REST services (INT2.1 or INT 2.2, depending on whether it is a GRV or GCP event) to get detailed information about the changed record. DEG always returns the current state of GRV and GCP records.Data from DEG is published to a Kafka topic (again, as a way of persisting the events and separating the processing stages). The topics used are:<env_code>-gw-internal-gcp-events-deg<env_code>-gw-internal-grv-events-degAgain, a custom message key (a concatenation of the market code and the unique Contact/User id) is used.Creating HCP entitiesThe last processing stage involves mapping data to the Reltio format and calling the MDM Gateway API to create HCP entities in Reltio. A process overview is shown below.Figure 7. Third processing stageProcess steps description:MAPChannel receives events from the Kafka topic on which they were published in the previous stage.MAPChannel filters events based on country activation criteria; events coming from non-activated countries are skipped. The list of active countries is controlled by a configuration parameter, separately for each source (GRV, GCP) - this is exactly the same parameter as in the previous stage.MapChannel maps data from GCP/GRV to HCP:EMEA mappingGLOBAL mappingThe validation status of the mapped HCP is checked - if it matches a configurable list of inactive statuses, the deleteCrosswalk operation is called on MDM Manager. As a result, entity data originating from GCP/GRV is deleted from Reltio.Otherwise, Map Channel calls the REST operation POST /hcp on MDM Manager (INT4.1) to create or replace the HCP profile in Reltio. MDM Manager handles the complexity of the update process in Reltio.Processing events from multiple sources and prioritizationAs mentioned in previous sections, there are three different SQS queues that are populated with events by Event Hub. 
Each of them is processed by a separate Camel Route, allowing for some flexibility and for prioritizing one queue above the others. This can be accomplished by altering the consumer configuration found in the application.yml file. The relevant section of that file is shown below. The queue eh-out-reltio-gcp-batch-update-dev has 15 consumers (and therefore 15 processing threads), while the two remaining queues have only 5 consumers each. This allows faster processing of GCP Batch events.The same principle applies to the further stages of the processing, which use Kafka endpoints. Again, there is a configuration section dedicated to each of the internal Kafka topics that allows tuning the pace of processing. "
},
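The ordering guarantee described above relies on the Kafka message key: equal keys always land on the same partition, so all events for one Contact/User are consumed in order. This is a minimal Python sketch of that idea; the "-" separator in the key and the CRC32-based partitioner are illustrative assumptions, not the actual implementation.

```python
import zlib

def message_key(market_code, contact_id):
    """Build the Kafka message key as a concatenation of the market code and
    the unique Contact/User id (the separator is an assumption of this sketch)."""
    return f"{market_code}-{contact_id}"

def partition_for(key, num_partitions):
    """Illustrative partitioner: a stable hash maps equal keys to the same
    partition, so all events for one record preserve their relative order."""
    return zlib.crc32(key.encode("utf-8")) % num_partitions
```

Two updates for the same German contact, for example, produce the same key and therefore the same partition, while updates for different contacts can spread across partitions for parallelism.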
{
"title": "HUB UI User Guide",
"pageID": "302701919",
"pageLink": "/display/GMDM/HUB+UI+User+Guide",
"content": "This page contains the complete user guide related to the HUB UI.Please check the sub-pages to get details about the HUB UI and its usage.Start with Main Page - HUB Status - main pageSome information that may be helpful when you are using the HUB UI:UI URL: https://api-emea-prod-gbl-mdm-hub.COMPANY.com/ui-emea-prod/ (there is no need to know all URLs; click one, and in the top right corner you can easily switch between tenants).How to connect to the UI and gain access to all features - UI Connect Guide(INTERNAL USAGE only by HUB Admins) UI role names and standards - Add new role and add users to the UIIf you want to add any new features to the HUB UI, please send your suggestions to the HUB Team: DL-ATP_MDMHUB_SUPPORT@COMPANY.com"
},
{
"title": "HUB Admin",
"pageID": "302701923",
"pageLink": "/display/GMDM/HUB+Admin",
"content": "All the subpages contain the user guide - how to use the hub admin tools.To gain access to the selected operation please read - UI Connect Guide"
},
{
"title": "1. Kafka Offset",
"pageID": "302703128",
"pageLink": "/display/GMDM/1.+Kafka+Offset",
"content": "DescriptionThis tab is available to a user with the MODIFY_KAFKA_OFFSET management role.Allows you to reset the offset for the selected topic and group.Kafka ConsumerPlease turn off your Kafka Consumer before executing this operation; it is not possible to manage an ACTIVE consumer groupRequired parametersGroup ID - the Kafka Consumer group that is connected to the topicTopic - the Kafka topic name that the user wants to manageDetailsThe offset parameter can take one of the following values:earliest - reset the consumer group to the beginning of the Kafka topic - use this to read all events one more timelatest - reset the consumer group to the end of the Kafka topic - use this to skip all events and set the consumer group at the end of the topic.shift by - allows you to move the consumer group by a specific amount of events. A negative number (e.g. -1000) shifts the consumer group by 1000 events to the left - meaning you will get 1000 events more. A positive number (e.g. 1000) shifts the consumer group by 1000 events to the right - meaning you will get 1000 events less. Use Case - you want to re-read the last 1000 events: first reset the offset to latest - LAG will be 0; then shift by (-1000) - LAG will be 1000 eventsdate - allows you to set the consumer group to a specific date, useful when you want to read events since a specific day. View"
},
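The offset arithmetic behind the earliest/latest/shift-by options can be sketched for a single partition. This is a minimal Python sketch of the logic only, not the Hub's implementation; the function name and the clamping behaviour at the partition boundaries are assumptions.

```python
def resolve_offset(mode, begin, end, current=None, shift_by=0):
    """Compute the target offset for one partition of a consumer group.
    begin/end are the earliest and latest offsets available on the partition."""
    if mode == "earliest":
        target = begin
    elif mode == "latest":
        target = end
    elif mode == "shift-by":
        target = current + shift_by  # negative values move left: events are re-read
    else:
        raise ValueError(f"unknown mode: {mode}")
    return max(begin, min(target, end))  # clamp to the valid offset range
```

The "read 1000 events" use case from above maps directly onto two calls: reset to latest (LAG becomes 0), then shift by -1000 (LAG becomes 1000).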
{
"title": "10. Jobs Manager",
"pageID": "337846274",
"pageLink": "/display/GMDM/10.+Jobs+Manager",
"content": "DescriptionThis page is available to users who scheduled the JOBAllows you to check the current status of an asynchronous operation Required parametersJob Type - choose a JOB to check the statusDetailsThe page shows the statuses of jobs for each operation.Click the Job Type and select the business operation.In the table below, all the jobs for all users in your AD group are displayed. You can track the jobs and download the reports here.Click the Refresh view button to refresh the pageClick the icon to download the report.View"
},
{
"title": "2. Partials",
"pageID": "302703134",
"pageLink": "/display/GMDM/2.+Partials",
"content": "DescriptionThis tab is available to the user with the LIST_PARTIALS role to manage the precallback service. It allows you to download a list of partials - these are events for which a needed change in Reltio has been detected and whose sending to output topics has been suspended. The operation allows you to specify the limit of returned records and to sort them by the time of their occurrence.HUB ADMINUsed only internally by MDM HUB ADMINSRequired parametersN/A - by default, you will get all partial entities.DetailsReturn timestamp instead - mark as true to get a date format instead of the duration of the partial in minutesReturn epoch millis - mark as true to get an EPOCH timestamp instead of the date formatLimit - put a number to limit the number of resultsSort - change the sort orderView"
},
{
"title": "3. HUB Reconciliation",
"pageID": "302703130",
"pageLink": "/display/GMDM/3.+HUB+Reconciliation",
"content": "DescriptionThis tab is available to the user with the reconciliation service management roles - RECONCILE and RECONCILE_COMPLEXThe operation accepts a list of identifiers for which it is to be performed. It allows you to trigger a reconciliation task for a selected type of object:relationsentitiespartialsDivided into 2 sections:TOP - Simple JOBS - simple query where the input is the entity URIBOTTOM - Complex jobs - complex query that schedules an Airflow JOB.Simple JOBS:Required parametersN/A - by default, generate CHANGE events and skip entities in REMOVE/INACTIVE/LOST_MERGE state. In that case, we only push CHANGE events. DetailsParameterDefault valueDescriptionforcefalseSend an event to output topics even when a partial update is detected or the checksum is the same.push lost mergefalseReconcile event with LOST_MERGE statuspush inactivatedfalseReconcile event with INACTIVE statuspush removedfalseReconcile event with REMOVE statusViewComplex JOBS:Required parametersCountries - list countries for which you want to generate CHANGE events. DetailsSimpleParameterDefault valueDescriptionforcefalseSend an event to output topics even when a partial update is detected or the checksum is the same.Countries N/Alist of countries.e.g: CA, MXSourcesN/Acrosswalk names for which you want to generate the events.Object TypeENTITYgenerates events from ENTITY or RELATION objectsEntity Typedepends on Object TypeCan be for ENTITY: HCP/HCO/MCO/DCRCan be for RELATION: input text in which you specify the relation, e.g.: OtherHCOToHCOBatch limitN/Alimit the number of events - useful for testing purposesComplexParameterDefault valueDescriptionforcefalseSend an event to output topics even when a partial update is detectedEntity QueryN/APUT the MATCH query to get Mongo results and generate events. 
e.g.: { "status": "ACTIVE", "sources": "ONEKEY", "country": "gb" }Entities limitN/Alimit the number of events - useful for testing purposesRelation QueryN/APUT the MATCH query to get Mongo results and generate events. e.g.: { "status": "ACTIVE", "sources": "ONEKEY", "country": "gb" }Relation limitN/Alimit the number of events - useful for testing purposesView"
},
{
"title": "4. Kafka Republish Events",
"pageID": "302703132",
"pageLink": "/display/GMDM/4.+Kafka+Republish+Events",
"content": "DescriptionThis page is available to users with the publisher manager roles - RESEND_KAFKA_EVENT and RESEND_KAFKA_EVENT_COMPLEXAllows you to resend events to output topics. It can be used in two modes: simple and complex.The operation will trigger a JOB with the selected parameters. In response, the user will receive an identifier that is used to check the status of the asynchronous operation in the 10. Jobs Manager tab.Simple modeRequired parametersCountries - list countries for which you want to generate CHANGE events. DetailsIn this mode, the user specifies values for defined parameters:ParameterDefault valueDescriptionSelect moderepublish CHANGE eventsnote:when you mark 'republish CHANGE events' - the process will generate CHANGE events for all entities that are ACTIVE, and will check if the entity is LOST_MERGE - then will generate LOST_MERGED events, DELETED - then will generate REMOVED events, INACTIVE - then will generate INACTIVATED events.when you mark 'republish CREATE events' - the process will generate CREATE events for all entities that are ACTIVE, and will check if the entity is LOST_MERGE - then will generate LOST_MERGED events, DELETED - then will generate REMOVED events, INACTIVE - then will generate INACTIVATED events.The difference between these 2 modes is that in one we generate CHANGE events and in the other CREATE events (depending on whether this is an IDL generation or not)CountriestrueList of countries for which the task will be performedSourcesfalseList of sources for which the task will be performedObject typetrueObject type for which operation will be performed, available values: Entity, RelationReconciliation targettrueOutput Kafka topic namelimittrueLimit of generated eventsmodification time fromfalseEvents with a modification date greater than this will be generatedmodification time tofalseEvents with a modification date less than this will be generatedViewComplex modeRequired parametersEntities query or  Relation queryDetails In this 
mode, the user defines the Mongo query that will be used to generate eventsParameterRequiredDescriptionSelect moderepublish CHANGE eventsnote:when you mark 'republish CHANGE events' - the process will generate CHANGE events for all entities that are ACTIVE, and will check if the entity is LOST_MERGE - then will generate LOST_MERGED events, DELETED - then will generate REMOVED events, INACTIVE - then will generate INACTIVATED events.when you mark 'republish CREATE events' - the process will generate CREATE events for all entities that are ACTIVE, and will check if the entity is LOST_MERGE - then will generate LOST_MERGED events, DELETED - then will generate REMOVED events, INACTIVE - then will generate INACTIVATED events.The difference between these two modes is that in one we generate CHANGE events and in the second CREATE events (depending on whether this is IDL generation or not)Entities querytrueResend entities Mongo queryEntities limitfalseResend entities limitRelation querytrueResend relations Mongo queryRelations limittrueResend relations limitReconciliation targettrueOutput kafka topic nameView"
},
{
"title": "5. Reltio Reindex",
"pageID": "337846264",
"pageLink": "/display/GMDM/5.+Reltio+Reindex",
"content": "DescriptionThis page is available to users with the reltio reindex role - REINDEX_ENTITIESAllows you to schedule Reltio Reindex JOB. It can be used in two modes: query and file.The operation will trigger JOB  with selected parameters. In response, the user will receive an identifier that is used to check the status of the asynchronous operation in the 10. Jobs Manager tab.Required parametersSpecify Countries in query mode or file with entity uris in file mode. Detailsquery ParameterDescriptionCountriesList of countries for which the task will be performedSourcesList of sources for which the task will be performedEntity typeObject type for which operation will be performed, available values: HCP/HCO/MCO/DCRBatch limitAdd if you want to limit the reindex to the specific number - helpful with testing purposesfileInput fileFile format: CSV Encoding: UTF-8Column headers: - N/AInput file example123entities/E0pV5Xmentities/1CsgdXN4entities/2O5RmRiViewReltio Reindex details:HUB executes Reltio Reindex API with the following default parameters:ParameterAPI Parameter nameDefault ValueReltio detailed descriptionUI detailsEntity typeentityTypeN/AIf provided, the task restricts the reindexing scope to Entities of specified type.User can specify  the EntityType is search API and the URIS list will be generated. There is no need to pass this to Reltio API becouse we are using the generated URI listSkip entities countskipEntitiesCount0If provided, sets the number of Entities which are skipped during reindexing.-Entities limitentitiesLimitinfinityIf provided, sets the maximum number of Entities are reindexed-Updated sinceupdatedSinceN/ATimestamp in Unix format. If this parameter is provided, then only entities with greater or equal timestamp are reindexed. This is a good way to limit the reindexing to newer records.-Update entitiesupdateEntitiestrue If set to true, initiates update for Search, Match tables, History. 
If set to false, then no rematching, no history changes, only ES structures are updated.If set to true (default), in addition to refreshing the ElasticSearch index, the task also updates history, match tables, and the analytics layer (RI). This ensures that all indexes and supporting structures are as up-to-date as possible. As explained above, however, triggering all these activities may decrease the overall performance level of the database system for business work, and overwhelm the event streaming channels. If set to false, the task updates ElasticSearch data only. It does not perform rematching, or update history or analytics. These other activities can be performed at different times to spread out the performance impact.-Check crosswalk consistencycheckCrosswalksConsistencyfalseIf true, this will start a task to check if all crosswalks are unique before reindexing data. Please note, if entitiesLimit or distributed parameters have any value other than default, this parameter will be unavailableSpecify true to reindex each Entity, whether it has changed or not. This operation ensures that each Entity in the database is processed. Reltio does not recommend this option as it decreases the performance of the reindex task dramatically, and may overload the server, which will interfere with all database operations.-URI listentityUrisgenerated list of URIs from UIOne or more entity URIs (separated by a comma) that you would like to process. For example: entities/<id1>, entities/<id2>.Reltio suggests using 50-100K URIs in one API request; this is a Reltio limitation. Our process splits into 100K files if required. 
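The 100K-URIs-per-request split mentioned above can be sketched as a small chunking helper (a hypothetical illustration, not code from the HUB codebase; the function name and default are assumptions):

```python
def split_uri_list(uris, chunk_size=100_000):
    """Split a flat list of entity URIs into chunks of at most
    chunk_size elements, one chunk per Reltio reindex request."""
    return [uris[i:i + chunk_size] for i in range(0, len(uris), chunk_size)]
```

Each resulting chunk would correspond to one Reltio task, which matches the note that a single HUB JOB may produce multiple Reltio tasks.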
Based on the input file size, one JOB from the HUB end may produce multiple Reltio tasks. The UI generates the list of URIs from a Mongo query, or we run the reindex with the input filesIgnore streaming eventsforceIgnoreInStreamingfalseIf set to true, no streaming events will be generated until after the reindex job has completed.-DistributeddistributedfalseIf set to true, the task runs in distributed mode, which is a good way to take advantage of a networked or clustered computing environment to spread the performance demands of reindexing over several nodes. -Job parts counttaskPartsCountN/A due to distributed=falseDefault value: 2The number of tasks which are created for distributed reindexing. Each task reindexes its own subset of Entities. Each task may be executed on a different API node, so that all tasks can run in parallel. Recommended value: the number of API nodes which can execute the tasks. Note: This parameter is used only in distributed mode (distributed=true); otherwise, it's ignored.-More details in Reltio docs:https://docs.reltio.com/en/explore/get-going-with-apis-and-rocs-utilities/reltio-rest-apis/engage-apis/tasks-api/reindex-data-taskhttps://docs.reltio.com/en/explore/get-your-bearings-in-reltio/console/tenant-management-applications/tenant-management/jobs/creating-a-reindex-data-job"
},
{
"title": "6. Merge/Unmerge entities",
"pageID": "337846268",
"pageLink": "/pages/viewpage.action?pageId=337846268",
"content": "DescriptionThis page is available to users with the merge/unmerge role - MERGE_UNMERGE_ENTITIESAllows you to schedule Merge/Unmerge JOB. It can be used in two modes: merge or unmerge.The operation will trigger JOB  with selected parameters. In response, the user will receive an identifier that is used to check the status of the asynchronous operation in the 10. Jobs Manager tab.Required parametersfile with profiles to be merged or unmerged in the selected formatDetailsfileInput fileFile format: CSV Encoding: UTF-8more details here - Batch merge & unmergeView"
},
{
"title": "7. Update Identifiers",
"pageID": "337846270",
"pageLink": "/display/GMDM/7.+Update+Identifiers",
"content": "DescriptionThis page is available to users with the update identifiers role - UPDATE_IDENTIFIERSAllows you to schedule update identifiers JOB.The operation will trigger JOB  with selected parameters. In response, the user will receive an identifier that is used to check the status of the asynchronous operation in the 10. Jobs Manager tab.Required parametersfile with profiles to be updated in the selected formatDetailsfileInput fileFile format: CSV Encoding: UTF-8more details here - Batch update identifiersView"
},
{
"title": "8. Clear Cache",
"pageID": "337846272",
"pageLink": "/display/GMDM/8.+Clear+Cache",
"content": "DescriptionThis page is available to users with the ETL clear cache role - CLEAR_CACHE_BATCHThe cache is related to the Direct Channel ETL jobs:Docs: ETL Batch Channel and ETL BatchesAllows you to clear the ETL checksum cache. It can be used in three modes: query or by_source or file.The operation will trigger JOB  with selected parameters. In response, the user will receive an identifier that is used to check the status of the asynchronous operation in the 10. Jobs Manager tab.Query modeRequired parametersBatch name  - specify a batch name for which you want to clear the cacheObject type - ENTITY or RELATIONEntity type - e.g. configuration/relationTypes/Employment or configuration/entityTypes/HCPDetailsParameterDescriptionBatch nameSpecify a batch on which the clear cache will be triggeredObject type ENTITY or RELATIONEntity typeIf object type is ENTITY then e.g:configuration/entityTypes/HCOconfiguration/entityTypes/HCPIf object type is RELATION then e.g.:configuration/relationTypes/ContactAffiliationsconfiguration/relationTypes/EmploymentCountryAdd a country if required to limit the clear cache query Viewby_source modeRequired parametersBatch name  - specify a batch name for which you want to clear the cacheSource - crosswalk type and valueDetailsSpecify a batch name and click add a source to specify new crosswalks that you want to remove from the cache.Viewfile modeRequired parametersBatch name  - specify a batch name for which you want to clear cachefile with crosswalks to be cleared in ETL cache in the selected format for specified batchDetailsfileInput fileFile format: CSV Encoding: UTF-8more details here - Batch clear ETL data load cacheViewView"
},
{
"title": "9. Restore Raw Data",
"pageID": "356650113",
"pageLink": "/display/GMDM/9.+Restore+Raw+Data",
"content": "DescriptionThis page is available to users with the restore data role - RESTOREThe raw data contains data send to MDM HUB:Docs: Restore raw dataAllows you to restore raw (source) data on selected environmentThe operation will trigger asynchronous job with selected parameters.Restore entitiesRequired parametersSource environment - restore data from another environment eg from QA to DEV environment, the default is the currently logged in environmentEntity type  - restore data only for specified entity type: HCP, HCO, MCOOptional parametersCountries - restore data only for specified entity country, eq: GB, IE, BRSources - restore data only for specified entity source, eq: GRV, ONEKEYDate Time - restore data created after specified date timeViewRestore relationsRequired parametersSource environment - restore data from another environment eg from QA to DEV environment, the default is the currently logged in environmentOptional parametersCountries - restore data only for specified entity country, eq: GB, IE, BRSources - restore data only for specified entity source, eq: GRV, ONEKEYRelation types- restore data only for specified relation type, eg: configuration/relationTypes/OtherHCOtoHCOAffiliationsDate Time - restore data created after specified date timeView"
},
{
"title": "HUB Status - main page",
"pageID": "333155175",
"pageLink": "/display/GMDM/HUB+Status+-+main+page",
"content": "DescriptionThe UI is divided into the following sections:MENUContains links to Ingestion Services ConfigurationIngestion Services TesterHUB AdminHEADERShows the current tenant name, click to quickly change the tenant to a different one.Shows the logged-in user name. Click to log out. FOOTERLink to User GuideLink to Connect GideLink to the whole HUB documentationLink to the Get Help pageCurrently deployed versionClick to get the details about the CHANGELOGon PROD - released versionon NON-PROD- snapshot version - Changelog contains unreleased changes that will be deployed in the upcoming release to PROD.HUB Status dashboard is divided into the following sections:On this page you can check HUB processing status / kafka topics LAGs / API availability / Snowflake DataMart refresh. API (related to the Direct Channel)API Availability  - status related to HUB API (all API exposed by HUB e.g. based on EMEA PROD - EMEA PROD Services )Reltio READ operations performance and latency - for example, GET Entity operations (every operation that gets data from Reltio)Reltio WRITE operations performance and latency - for example, POST/PATCH Entity operations (every operation that changes data in Reltio)Batches (related to the ETL Batch Channel)Currently running batches and duration of completed batches.Currently running batches may cause data load and impact event processing visible in the dashboard below (inbound and outbound)Event Processing Shows information about events that we are processing to:Inbound - all updates made by HUB on profiles in Reltioshows the ETA based on the:ETL Batch Channel (loading and processing events into HUB from ETL)Direct Channel processing:loading ETL data to Reltioloading Rankings/Callbacks/HcoNames (all updates on profiles on Reltio)    Outbound - streaming channel processing (related to the Streaming channel)shows the ETA based on the:Streaming channel - all events processing starting from Reltio SQS queue, events currently processing by 
HUB Streaming channel microservices.DataMart (related to the Snowflake MDM Data Mart)The time when the last REGIONAL and GLOBAL Snowflake data marts were refreshed.Shows the number of events that are still being processed by HUB microservices and are not yet consumed by the Snowflake Connector. "
},
{
"title": "Ingestion Services Configuration",
"pageID": "302701936",
"pageLink": "/display/GMDM/Ingestion+Services+Configuration",
"content": "DescriptionThis page shows configuration related to theData Quality checksSource Match CategorizationCleansing & FormattingAuto-FillsMinimum Viable Profile Check. Noise listsIdentifier noise listDuplicate identifier config.Choose a filter to switch between different entity types and use input boxes to filter results.Available filters:FilterDescriptionEntity TypeHCP/HCO/MCO - choose an entity type that you want to review and click SearchCategoryPick to limit the result and review only selected rulesCountryType a country code to limit the number of rules related to the specific countrySource Type a source to limit the number of rules related to the specific sourceQueryOpen Text filed -helps to limit the number of results when searching for specific attributes. Example case - put the "firstname" and click Search to get all rules that modify/use FirstName attribute.Audit filedComparison typeDateUse a combination of these 3 attributes to find rules created before or after a specific date. Or to get rules modified after a specific date. Click on the:Noise List ConfigID Noise ConfigDuplicate ID ConfigAnd get detailed information about current rules for specific type.NOTE: remember to change entity type and click Search to view rules for different entity types.                                                                                  "
},
{
"title": "Ingestion Services Tester",
"pageID": "302701950",
"pageLink": "/display/GMDM/Ingestion+Services+Tester",
"content": "DescriptionThis site allows you to test quality service. The user can select the input entity using the 'upload' button, paste the content of the entity into the editor or drag it. After clicking the 'test' button, the entity will be sent to the quality service. After processing, the result will appear in the right window. The user can choose two modes of presenting the result - the whole entity or the difference. In the second mode, only changes made by quality service will be displayed. After clicking the 'validation result' button, a dialog box will be displayed with information on which rules were applied during the operation of the service for the selected entity.Quality service tester editorValidation summary                                      Here you can check which rules were "triggered" and check the rule in the Ingestion Services Configuration using the Rule name.Search by text using attribute or "triggered" keyword to get all triggered rules.                                            "
},
{
"title": "Incremantal batch",
"pageID": "164470033",
"pageLink": "/display/GMDM/Incremantal+batch",
"content": "On the diagram below presented the generic structure of the batch flow. Data sources will have own instances of the flow configured:The flow consists of the following stages: Flow triggering is done by Airflow based on a schedule suited to a source data delivery time.  The source data files are downloaded from AWS S3 bucket managed by MMD HUB and they are preprocessed. The preprocessing is done using standard Unix tools run by Aifrlow as docker containers, and it is specific to  particular source requirements. The goal of the stage is preparing data for the mapping stage by cleaning and formatting. Source data are mapped to Reltio data model using Generic Mapper custom Java component that uses flexible mapping rules expressed as metadata configuration. The component produces HCP/HCP/relation update events and publish it to dedicated KAFKA topics. Each flow uses own topic to control access and prevent from uncontrolled data modification in Reltio by a source (Topic name is mapped to client privileges in HUB Gateway). The mapper generates update events in an order that reflects Reltio object dependencies. As first,  Main HCO events are generated, then child HCO events, and at the end HCP events. MDM Gateway receives update events, validates, call respective Reltio API to update profiles in Reltio, and send an acknowledge events (ACK) to a response topic containing statuses of processing update events. The events are processed in parallel. The number of threads depends on the number of Kafka consumers configured in the Gateway. The Generic Mapper component receives ACKs and send events for the next Reltio object,  or if all events are processed than it generates a report from a load. At the end of the process, the input files and the load report are copied to an archive location in S3.  Generic MapperGeneric Mapper is a component that converts source data into documents in the unified format required by Reltio API. 
The component is flexible enough to support incremental batches as well as full snapshots of data. Handling a new type of data source is a matter of (in most cases) creating a new configuration that consists of stage and metadata parts. The first one defines details of so-called "stages", i.e.: HCO, HCP, etc. The latter contains all mapping rules defining how to transform source data into attribute path/value form. Once data are transformed into the mentioned form it is easy to store it, merge it or do any other operation (including Reltio document creation) in the same way for all types of sources. This simple idea makes Generic Mapper a very powerful tool that can be extended in many ways. A stage is a logical group of steps that as a whole processes a single type of Reltio document, i.e.: HCO entity. At the beginning of each stage the component reads source data and generates attribute changes (events) and then stores them in an output file. It is worth noting that there can be many data sources configured. Once the output file is produced it is sorted. The above logic can be called phase 1 of a stage. Until now no database has been used. In phase 2 the sorted file is read, events are aggregated into groups in such a way that each element of a group refers to the same Reltio document. Next all lookups are resolved against a database, merged with the previous version of the document attributes and persisted. Then, the Reltio document (Json) is created and sent to Kafka. The stage is finished when all acks from the gateway are collected. Under the hood each stage is a sequence of jobs: a job (i.e.: the one for sorting a file) can be started only if its direct predecessor finished with a success. Stages can be configured to run in parallel and to depend on each other. Load reports At runtime Generic Mapper collects various types of data that give insight into DAG state and load statistics. 
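The sort-then-aggregate mechanics of phase 2 can be sketched as follows. This is a minimal Python sketch of the idea (the real component is Java); the tuple shape and function name are assumptions:

```python
from itertools import groupby
from operator import itemgetter

def aggregate_sorted_events(events):
    """Phase-2 style aggregation: 'events' are (doc_key, attr_path, value)
    tuples already sorted by doc_key (the phase-1 sort), so groupby can
    collect all attribute changes for one Reltio document in a single pass.
    Yields one merged attribute map per document key."""
    for key, group in groupby(events, key=itemgetter(0)):
        attrs = {}
        for _, path, value in group:
            attrs[path] = value  # a later event for the same path wins
        yield key, attrs
```

Because the file is pre-sorted, the aggregation is streaming and needs no database; lookups and merging with the previously persisted version happen afterwards, as described above.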
The HTML report is written to disk each time a status of any job is changed. The report consists of three panels: Summary, Metrics and DAG. The summary panel contains details of all jobs within a DAG that was created for the current execution (load). The DAG panel shows relationships between jobs in the form of a graph. The metrics panel presents details of a load. Each metric key is prefixed by a stage name.  Document processed or Document sent: number of Reltio documents processed with success. In the latter case the document was additionally sent to MDM Gateway.  Document not sent due to its deleted status: number of documents not processed because of its status marked as deleted (only for initDeletedLoadEnabled set to false, otherwise a document is processed anyway) Document not sent due to lack of delta: number of documents not processed because there was not any change discovered (only for deltaDetectionEnabled set to true, otherwise a document is processed anyway) MDMRequest creation error: number of documents not sent due to a problem with building MDMRequest object. This may happen if source data are not complete, i.e.: only specializations without root object attributes were delivered Lookup error: number of documents not processed due to problems with finding referenced data in a database.  Record filtered out: number of records filtered out during attribute change generation step. By default no record is filtered out, this may be changed via mapping configuration. Invalid record error: number of invalid records "
},
{
"title": "Kafka offset modification",
"pageID": "273695178",
"pageLink": "/display/GMDM/Kafka+offset+modification",
"content": "DescriptionThe REST interfaces exposed through the MDM Manager component used by clients to modify kafka offset.During the update, we will check access to groupId and specyfic topic.Diagram 1 presents flow, and kafka communication during offset modification.The diagrams below present a sequence of steps in processing client calls.Flow diagramStepsThe client sends HTTP request to MDM Manager endpoint.Kong API Gateway receives requests and handles authentication.If the authentication succeeds, the request is forwarded to MDM Manager component.MDM Manager checks user permissions to call kafka offset modification operation and the correctness of the request.If the user's permissions are correct, MDM Manager proceeds with offset modification.Offset modification cases:latest: to latest offsetearliest: to earliest offsetto date: to offset based on specyfied timestamp(Used to retrieve the earliest offset whose timestamp is greater than or equal to the given timestamp in the corresponding partition, timestamp in milliseconds)If You want shift offset for specific message number you can use "shift" attribute and specify positive or negative number of messages to shift (offset is calculated in memory based on "offset + shift" properties)TriggersTrigger actionComponentActionDefault timeREST callManager: POST /kafka/offsetmodify kafka offsetAPI synchronous requests - realtimeRequestResponse{    "groupId": "mdm_test_user_group",    "topic": "amer-dev-in-guest-tests",    "offset": "latest"}{    "values": [        {            "topic": "amer-dev-in-guest-tests",            "partition": 0,            "offset": 2        }    ]}{    "groupId": "mdm_test_user_group",    "topic": "amer-dev-in-guest-tests",    "offset": "earliest"}{    "values": [        {            "topic": "amer-dev-in-guest-tests",            "partition": 0,            "offset": 0        }    ]}{    "groupId": "mdm_test_user_group",    "topic": "amer-dev-in-guest-tests",    "offset": 
"2022-12-15T08:15:02Z"}{    "values": [        {            "topic": "amer-dev-in-guest-tests",            "partition": 0,            "offset": 1        }    ]}{    "groupId": "mdm_test_user_group",    "topic": "amer-dev-in-guest-tests",    "offset": "latest"    "partition": 4}{    "values": [        {            "topic": "amer-dev-in-guest-tests",            "partition": 4,            "offset": 2        }    ]}{    "groupId": "mdm_test_user_group",    "topic": "amer-dev-in-guest-tests",    "offset": "2022-12-15T08:15:02Z",    "shift": 5}{    "values": [        {            "topic": "amer-dev-in-guest-tests",            "partition": 0,            "offset": 6        }    ]}Dependent componentsComponentUsageManagercreate update Entities in MDM systemsAPI Gatewayproxy REST and secure access"
},
{
"title": "LOV read",
"pageID": "164469998",
"pageLink": "/display/GMDM/LOV+read",
"content": "The flow is triggered by API GET /lookup  call.  It retrives LOV data from HUB store. Process steps description:Client sends HTTP request to MDM Manager endpoint.Kong Gateway receives request and handles authenticationIf the authentication succeeds, the request is forwarded to MDM Manager componentMDM Manager checks user permissions to call getEntity operation and the correctness of the requestMDM Manager checks user profile configuration for lookup operation to determine whether to return results based on MongoDB state, or call Reltio directly.Request parameters are used to dynamically generate a query. This query is executed in findByCriteria method.Query results are returned to the client"
},
{
"title": "LOV update process (Nucleus)",
"pageID": "164469999",
"pageLink": "/pages/viewpage.action?pageId=164469999",
"content": "\nProcess steps description:\n\n\tNucleus Subscriber monitors AWS S3 location where CCV files are uploaded.\n\tWhen a new file is found, it is downloaded and processed. Single CCV zip file contains multiple *.exp files, which contain different parts of LOV header, description, references to values from external systems.\n\tEach *.exp file is processed line by line, with Dictionary change events generated for each line. These events are published to a Kafka topic from where the Event Publisher component receives them.\n\tAfter CCV file is processed completely, it is moved to archive subtree in S3 bucket folder structure.\n\tWhen Dictionary change event is received in Event Publisher the current state of LOV is first fetched from Mongo database. New data from the event is then merged with that state and the result is saved back in Mongo.\n\n\n\nAdditional remarks:\n\n\tCorrectness is ensured by the fact that LOV id is used as Kafka partitioning key, guaranteeing that events related to the same LOV are processed sequentially by the same thread.\n\tDictionary change events are considered internal to MDM Publishing Hub they are not forwarded to client systems subscribing to Entity change events.\n\n"
},
{
"title": "LOV update processes (Reltio)",
"pageID": "164469992",
"pageLink": "/pages/viewpage.action?pageId=164469992",
"content": "\n Figure 18. Updating LOVs from ReltioLOV update processes are triggered by timer on regular, configurable intervals. Their purpose is to synchronize dictionary values from Reltio. Below is the diagram outlining the whole process.\n\nProcess steps description:\n\n\tSynchronization processes are triggered at regular intervals.\n\tReltio Subscriber calls MDM Gateway lookups API to retrieve first batch of LOV data\n\tFetched data is inserted into the Mongo database. Existing records are updated\n\n\n\nSecond and third steps are repeated in a loop until there is no more LOV data remaining."
},
{
"title": "MDM Admin Flows",
"pageID": "302683297",
"pageLink": "/display/GMDM/MDM+Admin+Flows",
"content": ""
},
{
"title": "Kafka Offset",
"pageID": "302684674",
"pageLink": "/display/GMDM/Kafka+Offset",
"content": "Swagger: https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html#/Kafka/kafkaOffsetModificationAPI allows offset manipulation for consumergroup-topic pair. Offsets can be set to earliest/latest/timestamp, or adjusted (shifted) by a numeric value.An important point to mention is that in many cases offset does not equal to messages - shifting offset on a topic back by 100 may result in receiving 90 extra messages. This is due to compactation and retention - Kafka may mark offset as removed, but it still remains for the sake of continuity.Example 1Environment is EMEA DEV. User wants to consume the last 100 messages from his topic again. He is using topic "emea-dev-out-full-test-topic-1" and consumer-group "emea-dev-consumergroup-1".User has disabled the consumer - Kafka will not allow offset manipulation, if the topic/consumergroup is being used.He sent below request:\nPOST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev/kafka/offset\nBody:\n{\n  "topic": "emea-dev-out-full-test-topic-1",\n  "groupId": "emea-dev-consumergroup-1",\n  "shiftBy": -100\n}\nUpon re-enabling the consumer, 100 of the last events were re-consumed.Example 2User wants to consume all available messages from the topic again.User has disabled the consumer and sent below request:\nPOST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev/kafka/offset\nBody:\n{\n  "topic": "emea-dev-out-full-test-topic-1",\n  "groupId": "emea-dev-consumergroup-1",\n  "offset": earliest\n}\nUpon re-enabling the consumer, all events from the topic were available for consumption again."
},
{
"title": "Partial List",
"pageID": "302683607",
"pageLink": "/display/GMDM/Partial+List",
"content": "Swagger: https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html#/Precallback%20Service/reconcilePartials_1API calls Precallback Service's internal API and returns a list of events stuck in partial state (more information here). List can be limited and sorted. Partial age can be displayed in one of below formats:HH:mm:ss.fff duration(default)YYYY-MM-DDThh:mm:ss.sss timestampepoch timestamp.ExampleUser has noticed an alert being triggered for GBLUS DEV, informing about events in partial state. To investigate the situation, he sends the following request:\nGET https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gblus-dev/precallback/partials?absolute=true\nResponse:\n{\n "entities/1sgqoyCR": "2023-02-09T11:42:06.523Z",\n "entities/1eUqpXVe": "2023-02-01T12:39:57.345Z",\n "entities/2ZlDTE2U": "2023-02-09T11:40:30.950Z",\n "entities/2J1YiLW9": "2023-02-09T11:41:45.092Z",\n "entities/1KgPnkhY": "2023-02-01T12:39:58.594Z",\n "entities/1YpLnUIR": "2023-02-01T12:40:06.661Z"\n}\nHe realized, that it is difficult to quickly tell the age of each partial based on timestamp. He removed the absolute flag from request:\nGET https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gblus-dev/precallback/partials\nResponse:\n{\n "entities/1sgqoyCR": "27:26:56.228",\n "entities/1eUqpXVe": "218:29:05.406",\n "entities/2ZlDTE2U": "27:28:31.801",\n "entities/2J1YiLW9": "27:27:17.659",\n "entities/1KgPnkhY": "218:29:04.157",\n "entities/1YpLnUIR": "218:28:56.090"\n}\nThree partials have been stuck for more than 200 hours. Other three partials - for over 27 hours."
},
{
"title": "Reconciliation",
"pageID": "302683312",
"pageLink": "/display/GMDM/Reconciliation",
"content": "EntitiesSwagger: https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html#/Reconciliation/reconcileEntitiesAPI accepts a JSON list of entity URIs. URIs not beginning with "entities/" are filtered out. For each URI it:Checks entityType (HCP/HCO/MCO) in MongoChecks status (ACTIVE/LOST_MERGE/INACTIVE/REMOVED) in MongoIf entity is ACTIVE, it generates a *_CHANGED event and sends it to the ${env}-internal-reltio-events to be enriched by the Entity EnricherIf entity has status other than ACTIVE:If entity has status LOST_MERGE and pushLostMerge parameter is true, generate a *_LOST_MERGE event.If entity has status INACTIVE and pushInactived parameter is true, generate a *_INACTIVATED event.If entity has status DELETED and pushRemoved parameter is true, generate a *_REMOVED event.*Additional parameter, force, may be used. When set to true, event will proceed to the EventPublisher even if rejected by Precallbacks.ExampleUser wants to reconcile 4 entities, which have different data in Snowflake/Mongo than in Reltio:entities/108dNvgB is ACTIVEentities/10VLBsCl is LOST_MERGEentities/10bH3nze is INACTIVEentities/1065AHEA is DELETEDrelations/101LIzcm was mistakenly added to the listBelow request is sent (GBL DEV):\nPOST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gbl-dev/reconciliation/entities\nBody:\n["entities/108dNvgB", "entities/10VLBsCl", "entities/10bH3nze", "entities/1065AHEA", "relations/101LIzcm"]\nResponse:\n{\n "entities/10bH3nze": "false - Record with INACTIVE status in cache",\n "entities/1065AHEA": "false - Record with DELETED status in cache",\n "entities/10VLBsCl": "false - Record with LOST_MERGE status in cache",\n "entities/108dNvgB": "true",\n "relations/101LIzcm": "false"\n}\nOnly one event was generated: HCP_CHANGED for entities/108dNvgB.User decided that he also need an HCP_LOST_MERGE event for entities/10VLBsCl. 
He sent the same request with pushLostMerge flag:\nPOST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gbl-dev/reconciliation/entities?pushLostMerge=true\nBody:\n["entities/108dNvgB", "entities/10VLBsCl", "entities/10bH3nze", "entities/1065AHEA", "relations/101LIzcm"]\nResponse:\n{\n "entities/10bH3nze": "false - Record with INACTIVE status in cache",\n "entities/1065AHEA": "false - Record with DELETED status in cache",\n "entities/10VLBsCl": "true",\n "entities/108dNvgB": "true",\n "relations/101LIzcm": "false"\n}\nThis time, two events have been generated:HCP_CHANGED for entities/108dNvgBHCP_LOST_MERGE for entities/10VLBsClRelationsSwagger: https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html#/Reconciliation/reconcileRelationsAPI works the same way as for Entities, but this time URIs not beginning with "relations/" are filtered out.ExampleUser sent the same request as in previous example (GBL DEV):\nPOST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gbl-dev/reconciliation/relations\nBody:\n["entities/108dNvgB", "entities/10VLBsCl", "entities/10bH3nze", "entities/1065AHEA", "relations/101LIzcm"]\nResponse:\n{\n "entities/10bH3nze": "false",\n "entities/1065AHEA": "false",\n "entities/10VLBsCl": "false",\n "entities/108dNvgB": "false",\n "relations/101LIzcm": "false - Record with DELETED status in cache"\n}\nFirst 4 URIs have been filtered out due to unexpected prefix. 
Event for relations/101LIzcm has not been generated, because this relation has DELETED status in cache.Same request has been sent with pushRemoved flag:\nPOST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gbl-dev/reconciliation/relations?pushRemoved=true\nBody:\n["entities/108dNvgB", "entities/10VLBsCl", "entities/10bH3nze", "entities/1065AHEA", "relations/101LIzcm"]\nResponse:\n{\n "entities/10bH3nze": "false",\n "entities/1065AHEA": "false",\n "entities/10VLBsCl": "false",\n "entities/108dNvgB": "false",\n "relations/101LIzcm": "true"\n}\nA single event has been generated: RELATIONSHIP_REMOVED for relations/101LIzcm.PartialsSwagger: https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html#/Reconciliation/reconcilePartialsPartials Reconciliation API works the same way that Entities Reconciliation does, but it automatically fetches the current list of entities stuck in partial state using Partial List API.Partials Reconciliation API also handles push and force flags. Additionally, partials can be filtered by age, using partialAge parameter with one of following values: NONE (default), MINUTE, HOUR, DAY.ExampleUser wants to reload entities stuck in partial state in GBL DEV. Prometheus alert informs him that there are plenty, but he remembers that there is currently an ongoing data load, which may cause many temporary partials.User decides that he should use the partialAge parameter with value DAY, to only reload the entities which have been stuck for a longer while, and not generate unnecessary additional traffic.He sends the following request:\nPOST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-gbl-dev/reconciliation/partials?partialAge=DAY\nBody: -\nFlow fetches a full list of partials from Precallback Service API and filters out the ones stuck for less than a day. It then executes the Entities Reconciliation with this list. 
Response:\n{\n "entities/1yHHKEZ7": "true",\n "entities/2EHamZr3": "true",\n "entities/2EyP0kYM": "true",\n "entities/21QU96KG": "true",\n "entities/2BmHQMCn": "true"\n}\n5 HCP/HCO_CHANGED events have been generated as a result."
},
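The status checks and push flags described above boil down to a small per-URI decision. A minimal Python sketch, assuming nothing beyond the documented behaviour (the function and argument names are illustrative, not the actual MDM Hub implementation):

```python
# Hypothetical sketch of the per-URI decision made by the Entities
# Reconciliation API; names are illustrative, not the real service code.
def reconcile(uri, status, push_lost_merge=False, push_inactivated=False,
              push_removed=False):
    """Return the event suffix to emit for an entity URI, or None if skipped."""
    if not uri.startswith("entities/"):
        return None          # URIs with another prefix are filtered out
    if status == "ACTIVE":
        return "CHANGED"     # e.g. HCP_CHANGED, sent on to the Entity Enricher
    if status == "LOST_MERGE" and push_lost_merge:
        return "LOST_MERGE"
    if status == "INACTIVE" and push_inactivated:
        return "INACTIVATED"
    if status == "DELETED" and push_removed:
        return "REMOVED"
    return None              # reported back as "false - Record with ... status in cache"
```

With push_lost_merge=True, a LOST_MERGE record yields an event instead of being skipped, which is the difference between the two example responses.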
{
"title": "Resend Events",
"pageID": "302684685",
"pageLink": "/display/GMDM/Resend+Events",
"content": "API triggers an Airflow DAG. The DAG:Runs a query on MongoDB and generates a list of entity/relation URIs.Using Event Publisher's /resendLastEvent API, it produces outbound events for received reconciliationTarget (user-sent).Resend - SimpleSwagger: https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html#/Events/resendEventWhen using Simple API, user does not actually write the Mongo query - they instead fill in the blanks.Required parameters are:country filter,objectType (entity, relation)reconciliationTarget - this is configured for each routing rule in Event Publisher and, according to MDM Hub's support practices, should be equal to topic name,event limit - number.Optionally, objects can be filtered by:source,modification time.ExampleEnvironment is EMEA DEV. User wants to generate 300 entity events (HCP_CHANGED or HCO_CHANGED) for Poland, source CRMMI. His outbound topic is emea-dev-out-full-user-all.He sends the request:\nPOST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev/events/resend\nBody:\n{\n "countries": [\n "pl"\n ],\n "sources": [\n "CRMMI"\n ],\n "objectType": "ENTITY",\n "limit": 300,\n "reconciliationTarget": "emea-dev-out-full-user-all"\n}\nResponse:\n{\n "dag_id": "reconciliation_system_emea_dev",\n "dag_run_id": "manual__2023-02-13T14:26:22.283902+00:00",\n "execution_date": "2023-02-13T14:26:22.283902+00:00",\n "state": "queued"\n}\nA new Airflow DAG run was started. dag_run_id field contains this run's unique ID. 
Below request can be sent to fetch current status of this DAG run:\nGET https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev/events/resend/status/manual__2023-02-13T14:26:22.283902+00:00\nResponse:\n{\n "dag_id": "reconciliation_system_emea_dev",\n "dag_run_id": "manual__2023-02-13T14:26:22.283902+00:00",\n "execution_date": "2023-02-13T14:26:22.283902+00:00",\n "state": "running"\n}\nAfter the DAG has finished, 300 HCP_CHANGED/HCO_CHANGED events will have been generated to the emea-dev-out-full-user-all topic.Resend - ComplexSwagger: https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html#/Events/resendEventComplexFor Complex API, user writes their own Mongo query.Required parameters are:either entitiesQuery or relationsQuery - depending on object type and collection to be queried,reconciliationTarget.Optionally, resulting objects can be limited (separate fields for each query).ExampleAs in previous example, user wants to generate 300 events for Poland, source CRMMI. Output topic is emea-dev-out-full-user-all.This time, he sends the following request:\nPOST https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev/events/resend/complex\nBody:\n{\n "entitiesQuery": "{ 'country': 'pl', 'sources': 'CRMMI' }",\n "relationsQuery": null,\n "reconciliationTarget": "emea-dev-out-full-user-all",\n "limitEntities": 300,\n "limitRelations": null\n}\nResponse:\n{\n "dag_id": "reconciliation_system_emea_dev",\n "dag_run_id": "manual__2023-02-13T14:57:11.543256+00:00",\n "execution_date": "2023-02-13T14:57:11.543256+00:00",\n "state": "queued"\n}\nResend - StatusSwagger: https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html#/Events/getStatusAs described in previous examples, this API returns current status of DAG run. Request url parameter must be equal to dag_run_id. Possible statuses are:queuedsuccessrunningfailed"
},
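The resend APIs are asynchronous: the caller receives a dag_run_id and polls the status endpoint until a terminal state is reached. A transport-agnostic polling sketch (wait_for_dag and fetch_status are hypothetical helper names; only the four documented states come from the API description):

```python
import time

# The two terminal states of a DAG run, per the Resend - Status description;
# "queued" and "running" mean the run is still in progress.
TERMINAL_STATES = {"success", "failed"}

def wait_for_dag(fetch_status, dag_run_id, poll_seconds=5, max_polls=120):
    """Poll until the DAG run leaves 'queued'/'running'.

    fetch_status is an injected callable that GETs
    /events/resend/status/{dag_run_id} and returns the parsed JSON body."""
    for _ in range(max_polls):
        state = fetch_status(dag_run_id)["state"]
        if state in TERMINAL_STATES:
            return state
        time.sleep(poll_seconds)
    raise TimeoutError(f"DAG run {dag_run_id} did not finish in time")
```

Injecting the fetcher keeps the sketch independent of any particular HTTP client and makes it trivial to exercise without a live gateway.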
{
"title": "Internals",
"pageID": "164470109",
"pageLink": "/display/GMDM/Internals",
"content": ""
},
{
"title": "Archive",
"pageID": "333152415",
"pageLink": "/display/GMDM/Archive",
"content": ""
},
{
"title": "APM performance tests",
"pageID": "333152417",
"pageLink": "/display/GMDM/APM+performance+tests",
"content": "Performance tests were executed using Jmeter tool placed on CI/CD server.Test scenario:Create HCPSmall entityMedium size entityBig entityGet previously created entityTests werer performed by 4 parallel users  in a loop for 60 min.Test results:The decrease in component efficiency is not more than 3%The increase in the load on the nodes in not more than 5%(within the measurement error)"
},
{
"title": "Client integration specifics",
"pageID": "492493127",
"pageLink": "/display/GMDM/Client+integration+specifics",
"content": ""
},
{
"title": "Saudi Arabia integration with IQVIA",
"pageID": "492493129",
"pageLink": "/display/GMDM/Saudi+Arabia+integration+with+IQVIA",
"content": "Below design was confirmed with Alain and Eleni during 14.01.2025 meeting. Concept of such solution was earlier approved by AJ.Source: Lucid"
},
{
"title": "Components providers - AWS S3, networking, etc...",
"pageID": "273702388",
"pageLink": "/pages/viewpage.action?pageId=273702388",
"content": "TenantProviderReltioAWS accounts IDsIAM usersIAM rolesS3 bucketsNetwork (subnets, VPCe)Application IDEMEA NPRODPDCS - Kubernetes in IoDCOMPANYAirflow (S3) - 211782433747Snowflake (S3) - 211782433747Reltio (S3) -  211782433747AWS (PDCS) - 330470878083Airflow (S3)- arn:aws:iam::211782433747:user/svc_atp_euw1_mdmhub_nprod_rw_s3Snowflake (S3) - arn:aws:iam::211782433747:user/svc_atp_euw1_mdmhub_nprod_rw_s3Reltio (S3) - arn:aws:iam::211782433747:user/svc_atp_euw1_mdmhub_nprod_rw_s3Node Instance Role ARN: arn:aws:iam::330470878083:role/atp-mdmhub-nprod-emea-eks-worker-NodeInstanceRole-1OG6IFX6DO8B9Reltio Export IAM Role: arn:aws:iam::555316523483:role/PFE-CB-PROD-GLOBALMDMHUB-RW-SSOAirflow - pfe-atp-eu-w1-nprod-mdmhub Snowflake - pfe-atp-eu-w1-nprod-mdmhubReltio - pfe-atp-eu-w1-nprod-mdmhubVPCvpc-0c55bf38e97950aa5Subnetssubnet-067425933ced0e77f (●●●●●●●●●●●●●●)subnet-0e485098a41ac03ca (●●●●●●●●●●●●●●)SC3028977EMEA PRODAirflow (S3) - 211782433747Snowflake (S3) - 211782433747Reltio (S3) -  211782433747AWS (PDCS) - 330470878083S3 backup bucket - 604526422050Airflow (S3) - arn:aws:iam::211782433747:user/SRVC-MDMCDI-PRODSnowflake (S3) - arn:aws:iam::211782433747:user/SRVC-MDMCDI-PRODReltio (S3) - arn:aws:iam::211782433747:user/svc_atp_euw1_mdmhub_mdm_exports_prod_rw_s3Node Instance Role ARN: arn:aws:iam::330470878083:role/atp-mdmhub-prod-emea-eks-worker-n-NodeInstanceRole-11OT3ADBULAGCReltio Export IAM Role: arn:aws:iam::555316523483:role/PFE-CB-PROD-GLOBALMDMHUB-RW-SSOAirflow - pfe-atp-eu-w1-prod-mdmhubSnowflake - pfe-atp-eu-w1-prod-mdmhubReltio - pfe-atp-eu-w1-prod-mdmhubBackups - pfe-atp-eu-w1-prod-mdmhub-backupemaasp202207120811VPCvpc-0c55bf38e97950aa5Subnetssubnet-067425933ced0e77f (●●●●●●●●●●●●●●)subnet-0e485098a41ac03ca (●●●●●●●●●●●●●●)SC3211836AMER NPRODPDCS - Kubernetes in IoDCOMPANYAirflow (S3) - 555316523483Snowflake (S3)-  555316523483Reltio (S3) -  555316523483AWS (PDCS) - 330470878083Airflow (S3) - 
arn:aws:iam::555316523483:user/SRVC-MDMGBLFTSnowflake (S3) - arn:aws:iam::555316523483:user/SRVC-MDMGBLFTReltio (S3) - arn:aws:iam::555316523483:user/SVRC-MDMRELTIOGBLFTNPRODNode Instance Role ARN: arn:aws:iam::330470878083:role/atp-mdmhub-nprod-amer-eks-worker-NodeInstanceRole-1X8MZ6QZQD5V7Reltio Export IAM Role: arn:aws:iam::555316523483:role/PFE-CB-PROD-GLOBALMDMHUB-RW-SSOAirflow - gblmdmhubnprodamrasp100762Snowflake - gblmdmhubnprodamrasp100762Reltio - gblmdmhubnprodamrasp100762VPCvpc-0aedf14e7c9f0c024Subnetssubnet-0dec853f7c9e507dd (10.9.0.0/18)subnet-07743203751be58b9 (10.9.64.0/18)SC3028977AMER PRODAirflow (S3) - 604526422050Snowflake (S3)- 604526422050Reltio (S3) -  555316523483AWS (PDCS) - 330470878083Backup bucket (S3) - 604526422050Airflow (S3) - arn:aws:iam::604526422050:user/SRVC-MDMGBLFTSnowflake (S3) - arn:aws:iam::604526422050:user/SRVC-MDMGBLFTReltio (S3) - arn:aws:iam::555316523483:user/SVRC-MDMRELTIOGBLFTNPRODNode Instance Role ARN: arn:aws:iam::330470878083:role/atp-mdmhub-prod-amer-eks-worker-n-NodeInstanceRole-1KA6LWUDBA3OIReltio Export IAM Role: arn:aws:iam::555316523483:role/PFE-CB-PROD-GLOBALMDMHUB-RW-SSOAirflow - gblmdmhubprodamrasp101478Snowflake - gblmdmhubprodamrasp101478Reltio - gblmdmhubprodamrasp101478Backups - pfe-atp-us-e1-prod-mdmhub-backupamrasp202207120808VPCvpc-0aedf14e7c9f0c024Subnetssubnet-0dec853f7c9e507dd (10.9.0.0/18)subnet-07743203751be58b9 (10.9.64.0/18)SC3211836APAC NPRODPDCS - Kubernetes in IoDCOMPANYAirflow (S3) - 555316523483Snowflake (S3) - 555316523483Reltio (S3) -  555316523483AWS (PDCS) - 3304708780831.Airflow - (S3) - arn:aws:iam::555316523483:user/svc_atp_aps1_mdmetl_nprod_rw_s32. Snowflake (S3) - arn:aws:iam::555316523483:user/svc_atp_aps1_mdmetl_nprod_rw_s33. 
Reltio (S3) - arn:aws:iam::555316523483:user/SVRC-MDMRELTIOGBLFTNPRODNode Instance Role ARN: arn:aws:iam::330470878083:role/atp-mdmhub-nprod-apac-eks-worker-NodeInstanceRole-1053BVM6D7I2LReltio Export IAM Role: arn:aws:iam::555316523483:role/PFE-CB-PROD-GLOBALMDMHUB-RW-SSOAirflow - globalmdmnprodaspasp202202171347Snowflake - globalmdmnprodaspasp202202171347Reltio - globalmdmnprodaspasp202202171347VPCvpc-0d4b6d3f77ac3a877Subnetssubnet-018f9a3c441b24c2b (●●●●●●●●●●●●●●●)subnet-06e1183e436d67f29 (●●●●●●●●●●●●●●●)SC3028977APAC PRODAirflow (S3) -Snowflake (S3) - Reltio -  555316523483AWS (PDCS) - 330470878083S3 backup bucket 6045264220501.Airflow - (S3) -  arn:aws:iam::604526422050:user/svc_atp_aps1_mdmetl_prod_rw_s32. Snowflake (S3) - arn:aws:iam::604526422050:user/svc_atp_aps1_mdmetl_prod_rw_s33. Reltio (S3) - arn:aws:iam::555316523483:user/SVRC-MDMRELTIOGBLFTNPRODNode Instance Role ARN: arn:aws:iam::330470878083:role/atp-mdmhub-prod-apac-eks-worker-n-NodeInstanceRole-1NMGPUSYG7H8QReltio Export IAM Role: arn:aws:iam::555316523483:role/PFE-CB-PROD-GLOBALMDMHUB-RW-SSOAirflow - globalmdmprodaspasp202202171415Snowflake - globalmdmprodaspasp202202171415Reltio - globalmdmprodaspasp202202171415Backups - pfe-atp-ap-se1-prod-mdmhub-backuaspasp202207141502VPCvpc-0d4b6d3f77ac3a877Subnetssubnet-018f9a3c441b24c2b (●●●●●●●●●●●●●●●)subnet-06e1183e436d67f29 (●●●●●●●●●●●●●●●)SC3211836GBLUS NPRODPDCS - Kubernetes in IoDCOMPANYAirflow (S3) - 555316523483Snowflake (S3) - 555316523483Reltio (S3) -  555316523483AWS (PDCS) - 330470878083Airflow (S3) - arn:aws:iam::555316523483:user/SRVC-MDMGBLFTSnowflake (S3) - arn:aws:iam::555316523483:user/SRVC-MDMGBLFTReltio (S3) - arn:aws:iam::555316523483:user/SVRC-MDMRELTIOGBLFTNPRODReltio Export IAM Role: arn:aws:iam::555316523483:role/PFE-CB-PROD-GLOBALMDMHUB-RW-SSOAirflow - gblmdmhubnprodamrasp100762Snowflake - gblmdmhubnprodamrasp100762Reltio - gblmdmhubnprodamrasp100762Same as AMER NPRODSC3028977GBLUS PRODAirflow (S3) - 604526422050Snowflake - 
604526422050Reltio (S3) -  AWS (PDCS) - 330470878083S3 backup bucket - 604526422050Airflow (S3) - arn:aws:iam::604526422050:user/SRVC-MDMGBLFTSnowflake (S3) - arn:aws:iam::604526422050:user/SRVC-MDMGBLFTReltio (S3) - arn:aws:iam::555316523483:user/SVRC-MDMRELTIOGBLFTNPRODReltio Export IAM Role: arn:aws:iam::555316523483:role/PFE-CB-PROD-GLOBALMDMHUB-RW-SSOAirflow - gblmdmhubprodamrasp101478Snowflake - gblmdmhubprodamrasp101478Reltio - gblmdmhubprodamrasp101478Backups - pfe-atp-us-e1-prod-mdmhub-backupamrasp202207120808Same as AMER  PRODSC3211836GBL NPRODPDCS - Kubernetes in IoDIQVIAAirflow (S3) -Snowflake (S3) - 211782433747Reltio (S3) -  AWS (PDCS) - 3304708780831.Airflow (S3) - arn:aws:iam::211782433747:user/svc_atp_euw1_mdmhub_nprod_rw_s32. Snowflake (S3) - arn:aws:iam::211782433747:user/svc_atp_euw1_mdmhub_nprod_rw_s33. Reltio (S3) - arn:aws:iam::211782433747:user/svc_atp_euw1_mdmhub_mdm_exports_prod_rw_s3Reltio Export IAM Role: arn:aws:iam::555316523483:role/PFE-CB-PROD-GLOBALMDMHUB-RW-SSOAirflow - pfe-atp-eu-w1-nprod-mdmhubSnowflake - pfe-atp-eu-w1-nprod-mdmhubReltio - pfe-atp-eu-w1-nprod-mdmhubSame as EMEA NPRODSC3028977GBL PRODAirflow (S3) -Snowflake (S3) - 211782433747Reltio (S3) -  AWS (PDCS) - 330470878083S3 backup bucket - 6045264220501.Airflow (S3) - arn:aws:iam::211782433747:user/svc_mdm_project_rw_s32. Snowflake (S3) - arn:aws:iam::211782433747:user/svc_mdm_project_rw_s33. 
Reltio (S3) - arn:aws:iam::211782433747:user/svc_mdm_project_rw_s3 ???Reltio Export IAM Role: arn:aws:iam::555316523483:role/PFE-CB-PROD-GLOBALMDMHUB-RW-SSOAirflow - pfe-baiaes-eu-w1-projectSnowflake - pfe-baiaes-eu-w1-projectReltio - pfe-baiaes-eu-w1-projectBackups - pfe-atp-eu-w1-prod-mdmhub-backupemaasp202207120811Same as EMEA PRODSC3211836FLEX NPRODCloudBroker - EC2IQVIAAirflow (S3) -Reltio (S3) - Airflow - mdmnprodamrasp22124Reltio - mdmnprodamrasp22124FLEX PRODAirflow (S3) - Reltio (S3) - Airflow - mdmprodamrasp42095Reltio - mdmprodamrasp42095ProxyRapid - EC2N/AAWS EC2 - 432817204314MonitoringCloudBroker - EC2N/AAWS EC2 - 604526422050AWS S3 - 604526422050Thanos (S3) - arn:aws:iam::604526422050:user/SRVC-gblmdmhubNode Instance Role: arn:aws:iam::604526422050:role/PFE-ATP-MDMHUB-MONITORING-BACKUP-ROLE-01Grafana Backup - pfe-atp-us-e1-prod-mdmhub-grafanaamrasp20240315101601Thanos - pfe-atp-us-e1-prod-mdmhub-monitoringamrasp20240208135314Jenkins buildFLEX AirflowCloudBroker - EC2N/AVPC:Jenkins vpc-12aa056a"
},
{
"title": "Configuration",
"pageID": "164470110",
"pageLink": "/display/GMDM/Configuration",
"content": "\nAll runtime configuration is stored in GitHub repository and changes are monitored using GIT history. Sensitive data is encrypted by Ansible Vault using AES256 algorithm and decrypted only during automatic deployment managed by Continuous Delivery process in Jenkins. "
},
{
"title": "●●●●●●●●●●●●● [https://confluence.COMPANY.com/plugins/servlet/pii4conf/pii?id=1587199]",
"pageID": "164470111",
"pageLink": "/pages/viewpage.action?pageId=164470111",
"content": "\nConfiguration for all environments is placed in mdm-reltio-handler-env/inventory branch.\nAvailable environments:\n\n\tdev/qa/stage/uat/test\n\t\n\t\t●●●●●●●●●●●●●\n\t\t●●●●●●●●●●●●●\n\t\n\t\n\tprod\n\t\n\t\t●●●●●●●●●●●●●\n\t\t●●●●●●●●●●●●●\n\t\t●●●●●●●●●●●●●\n\t\n\t\n\n\n\nIn order to separate variables for each service, we created the following groups:\n\n\t[gw-services]\n\t[hub-services]\n\t[kong]\n\t[mongo]\n\t[kafka]\n\n"
},
{
"title": "Kafka",
"pageID": "164470104",
"pageLink": "/display/GMDM/Kafka",
"content": "\nKafka deployment procedures\n\n\tinstall_hub_broker.yml this procedure is created to deploy kafka/zookeeper on environments other than PROD.\n\tinstall_hub_broker_cluster.yml this procedure is created to deploy kafka/zookeeper on PROD environment.\n\n\n\nKafka variables\nProduction Kafka cluster requires the following variables:\n\n\tGlobally:\n\t\n\t\thub_broker_truststore_file/password kafka server truststore file name and password\n\t\thub_broker_keystore_file/password kafka keystore file name and password\n\t\thub_broker_admin_user/password kafka admin user name and password\n\t\thub_broker_jaas_config_file kafka jaas config file with Server auth(kafka) and Client auth(zookeeper)\n\t\tkafka_environment_KAFKA_ZOOKEEPER_CONNECT list of zookeeper services required by kafka to enable cluster connection.\n\t\tzoo_users zookeeper is deployed with server auth, this map contains admin user and password.\n\t\tzoo_servers - list of zookeeper servers, each host has to have unique id [1/2/3]\n\t\tkafka_extra_hosts list of kafka hosts, these lines will be added to /etc/hosts file on each kafka docker container\n\t\n\t\n\tVariables per host unique values.\n\t\n\t\tzoo_myid zookeeper server id\n\t\tkafka_environment_KAFKA_BROKER_ID kafka broker id\n\t\tkafka_environment_KAFKA_ADVERTISED_PORT kafka advertised port\n\t\tkafka_environment_KAFKA_ADVERTISED_HOST_NAME kafka host name\n\t\tfirewalld_ports kafka ports to open in firewalld service.\n\t\n\t\n\tDevelopment kafka instance requires the following variables:\n\t\n\t\thub_broker_truststore_file/password kafka server truststore file name and password\n\t\thub_broker_keystore_file/password kafka keystore file name and password\n\t\thub_broker_admin_user/password kafka admin user name and password\n\t\thub_broker_jaas_config_file kafka jaas config file with Server auth(kafka) and Client auth(zookeeper)\n\t\n\t\n\tAdditionally:\n\t\n\t\ttopics.yml definitions of kafka topics\n\t\tusers.yml definitions of kafka 
users\n\t\n\t\n\n"
},
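The per-host variables listed above could be laid out in an Ansible inventory roughly as follows. This is a hypothetical fragment with placeholder values; only the variable names come from the list above, and the host name and ports are invented for illustration:

```yaml
# Hypothetical host_vars fragment for one production Kafka broker.
zoo_myid: 1                                   # unique zookeeper server id
kafka_environment_KAFKA_BROKER_ID: 1          # unique kafka broker id
kafka_environment_KAFKA_ADVERTISED_PORT: 9092
kafka_environment_KAFKA_ADVERTISED_HOST_NAME: kafka-01.example.internal
firewalld_ports:                              # kafka ports to open in firewalld
  - 9092/tcp
```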
{
"title": "Kong",
"pageID": "164470105",
"pageLink": "/display/GMDM/Kong",
"content": "\nKong deployment procedures\n\n\tinstall_mdmgw_gateway.yml this procedure is created to deploy kong/cassandra on all available environments.\n\tupdate_kong_api.yml this procedure is created to manage kong api. Available kong components which can be managed are:\n\t\n\t\tconsumers\n\t\tapis\n\t\tcertificates\n\t\n\t\n\n\n\nKong variables\nCassandra memory parameters are controlled by:\n\n\tkong_database_max_heap_size: "512M" overwrites Xms and Xmx parameters.\n\tkong_database_heap_newsize: "400M" overwrites Xmn parameters\n\n\n\nKong required variables:\n\n\tinstall_base_dir kong docker-compose.yml file deployment directory\n\tkong_cluster_main_host this parameter defines if kong and Cassandra will be deployed in cluster mode. This parameter is declared on PROD environment and contains main CASSANDRA_BROADCAST_ADDRESS. On DEV environment this parameter is not defined.\n\n\n\nTo manage kong api through deployment procedure these maps are needed:\n\n\tkong_apis defines kong apis. It is a list of kong apis with required parameters:\n\t\n\t\tkong_api_obj_name kong api name (e.g. "gw-api")\n\t\tkong_api_obj_upstream_url api upstream url (e.g. http://mdmgw_mdm-manager_1:8081)\n\t\tkong_api_obj_uris api uri (eg. /gw-api)\n\t\tkong_api_obj_methods api methods (e.g. GET/POST/PATH)\n\t\tkong_api_obj_plugins (required plugin is key-auth)\n\t\n\t\n\tkong_consumers defines kong consumers. It is a list of kong consumers with required parameters:\n\t\n\t\tkong_consumer_obj_username kong user name\n\t\tkong_consumer_obj_auth_creds kong required credentials "key-auth"\n\t\t\n\t\t\tkey dedicated key for kong user\n\t\t\n\t\t\n\t\n\t\n\t[optional] kong_certificates - defines kong certificates to enable ssl communication. It is a list of kong snis with key and cert files:\n\t\n\t\tkong_certificate_obj_snis list of available snis\n\t\tkong_certificate_obj_cert kong certificate file\n\t\tkong_certificate_obj_key kong server key file\n\t\n\t\n\n"
},
{
"title": "Mongo",
"pageID": "164470004",
"pageLink": "/display/GMDM/Mongo",
"content": "\nMongo deployment procedures\n\n\tinstall_hub_db.yml this procedure is created to deploy mongo on environments other than PROD.\n\tinstall_hub_mongo_cluster.yml this procedure is created to deploy mongo cluster on PROD environment\n\n\n\nMongo variables\nProduction mongo cluster requires the following variables declared in /inventory/prod/group_vars/ all/all.yml file:\n\n\tmdm_mongo_base_dir mongo base directory where shards/configs/routers will be deployed.\n\tmongo_first_run [True/False] - switch this variable to True when there is the first deployment of mongo cluster.\n\trecreate_services [True/False] - if True all docker-compose files will be started with "up -d" parameter, which means all mongo services will be recreated. Run with True when there is a need to add new shard instance.\n\tregenerate_firewalld_config [True/False] - if True, all ports defined in "mongo_cluster" map will be added to firewall service.\n\tmongo_cluster describes whole mongo cluster. On production environment there are 3 mongo instances:\n\t\n\t\tmongo_server_01 - each instance can define mongo shards/configs/routers with required variables: [id, instance_name, port, host]\n\t\tmongo_server_02\n\t\tmongo_server_03\n\t\n\t\n\n\n\nDevelopment mongo instance requires the following variables declared in /inventory/dev/group_vars/all/all.yml file:\n\n\thub_db_install_dir mongo base directory\n\thub_db_name mongo db XXXeltio db name\n\thub_db_user mongo db XXXeltio user name\n\n"
},
{
"title": "Services - hub_gateway",
"pageID": "164470005",
"pageLink": "/display/GMDM/Services+-+hub_gateway",
"content": "\nServices deployment procedures\nHub deployment procedure: \n\n\tinstall_mdmhub_services.yml\n\n\n\n \nGateway deployment procedure:\n\n\tinstall_mdmgw_services.yml\n\n\n\nServices variables\n[gw-services] - this group contains variables for map channel and mdm manager in the following two maps:\n\n\tmap_channel\n\tmdm_manager\n\n\n\n[hub-services] - this group contains variables for hub api, reltio subscriber and event publisher in the following maps:\n\n\tevent_publisher\n\thub_api\n\treltio_subscriber\n\n\n\nIt is possible to redefine JVM_OPTS or any other environment using these maps:\n\n\tmdm_manager_environments\n\t\n\t\te.g. "JVM_OPTS=-server -Xms128m -Xmx512m -Djava.security.auth.login.confi g=/opt/mdm-gw-manager/config/kafka_jaas.conf"\n\t\n\t\n\tmap_channel_environments\n\tconsole_environments\n\n"
},
{
"title": "Data storage",
"pageID": "164470006",
"pageLink": "/display/GMDM/Data+storage",
"content": "\nPublishing Hub among other functions serves as data store, caching the latest state of each Entity fetched from Reltio MDM. This allows clients to take advantage of increased performance and high availability provided by MongoDB NoSQL database. "
},
{
"title": "Data structures",
"pageID": "164470007",
"pageLink": "/display/GMDM/Data+structures",
"content": "\n Figure 21. Structure of Publishing HUB's databasesThe following diagram shows the structure of DB collections used by Publishing Hub.\n\nDetailed description:\n\n\tentityHistory collection storing MDM Entities (HCP, HCO), along with some metadata for easier lookup/processing.\n\t\n\t\t_id unique id of an Entity. Publishing Hub is reusing attribute "uri" from Reltio model (e.g. "entities/ipa1iKq")\n\t\tcountry two-letter country code, in lowercase (e.g. "de")\n\t\tcreationDate timestamp of record creation (i.e. insertion to Mongo)\n\t\tentity the Reltio Entity\n\t\tentityType type of the entity (e.g. "configuration/entityTypes/HCO")\n\t\tlastModificationDate timestamp of last update of the record.\n\t\tmergedEntitiesUris identifiers of child (merged) entities (for entities that "won" merge event in Reltio)\n\t\tparentEntityId identifier of the parent entity (for entities in "LOST_MERGE" status)\n\t\tsources array of source system codes (e.g. "OK", "GRV", "FACE")\n\t\tstatus current status of the entity (one of: ACTIVE, DELETED, LOST_MERGE)\n\t\tmdmSource name of the source MDM system, currently one of "RELTIO", "NUCLEUS"\n\t\n\t\n\tLookupValues collection storing dictionary data from Reltio.\n\t\n\t\t_id unique id of the record. This is generated as concatenation of "type" and "code" attributes from Reltio\n\t\tupdatedOn timestamp of last update of the record in Mongo\n\t\tvalueUpdatedOn timestamp of last update of LOV in Reltio (values in Mongo are updated every 24h, whether or not they are actually changed in Reltio, so this value represents the timestamp of actual data change, not timestamp of refresh action)\n\t\ttype LookupValue type, as defined by Reltio, e.g. "configuration/lookupTypes/ IMS_LKUP_SPECIALTY"\n\t\tcode LookupValue code, as defined by Reltio, e.g. 
SPEC\n\t\tcountries list of countries this LookupValue is valid for\n\t\tmdmSource name of the source MDM system, currently one of "RELTIO", "NUCLEUS"\n\t\tvalue LookupValue (full JSON, in Reltio-defined format even for Nucleus data)\n\t\n\t\n\n\n\nINSERT vs UPSERT\nTo speed up database operations Publishing Hub takes advantage of the MongoDB "upsert" flag of the db.collection.update() method. This allows the application to skip the potentially costly query checking if the entity already exists in the database. Instead, the update operation is called right away, ceding the responsibility of checking for entity existence to Mongo's internal mechanisms."
},
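The upsert behaviour can be illustrated with a tiny in-memory model: a single call either creates the record or overwrites it, with no prior find() round-trip. The dict stands in for the real db.collection.update(..., upsert: true) call, and the helper name is made up for this sketch:

```python
# In-memory illustration of the upsert semantics used by Publishing Hub.
# 'collection' maps _id -> document, mimicking a Mongo collection keyed by _id.
def upsert(collection, doc):
    existing = collection.get(doc["_id"], {})
    existing.update(doc)               # merge new fields over the old document
    collection[doc["_id"]] = existing  # insert and update share one code path
    return collection
```

The point of the pattern is that the caller never branches on "does this entity exist yet" — that check is pushed down into the store itself.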
{
"title": "Indexes",
"pageID": "164470001",
"pageLink": "/display/GMDM/Indexes",
"content": "\nAll of the fields in database collections are indexed, except complex documents (i.e. "entity" in entityHistory, "value" in LookupValues). Queries that do not use indexes (for example querying arbitrarily nested attributes of "entity") might suffer from bad performance. "
},
{
"title": "DoR, AC, DoD",
"pageID": "294674667",
"pageLink": "/display/GMDM/DoR%2C+AC%2C+DoD",
"content": ""
},
{
"title": "DoD - template",
"pageID": "294674670",
"pageLink": "/display/GMDM/DoD+-+template",
"content": "Requirements of task needed to be met before closing:Ticket deployed to dev and qa environmentChange is documentedAC are met."
},
{
"title": "DoR - template",
"pageID": "294674659",
"pageLink": "/display/GMDM/DoR+-+template",
"content": "Requirements of task needed to be met before pushing to the Sprint:Fields in Jira ticket are filledFix versionEpic LinkComponent/sBusiness value is known and included in a ticket descriptionIf there is a deadline, it is understood and included in a ticket descriptionAcceptance Criteria are includedA ticket is estimated in Story Points."
},
{
"title": "Exponential Back Off",
"pageID": "164469928",
"pageLink": "/display/GMDM/Exponential+Back+Off",
"content": "BackOff mechanizm that increases the back off period for each retry attempt. When the interval has reached the max interval, it is no longer increased. Stops retrying once the max elapsed time has been reached.Example: The default interval is 2000L ms, the default multiplier is 1.5, and the default max interval is 30000L. For 10 attempts the sequence will be as follows:requestback off ms120002300034500467505101256151877227808300009300001030000Note that the default max elapsed time is Long.MAX_VALUE. Use setMaxElapsedTime(long) to limit the maximum length of time that an instance should accumulate before returning BackOffExecution.STOP.Implementation based on spring-retry library."
},
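The sequence in the example above can be reproduced with a few lines of Python. This is a sketch of the policy as described, not the spring-retry source; truncating to whole milliseconds is an assumption that happens to match the documented values:

```python
# Sketch of the exponential back-off sequence described above
# (defaults: initial 2000 ms, multiplier 1.5, capped at 30000 ms).
def backoff_intervals(attempts, initial=2000, multiplier=1.5, max_interval=30000):
    """Return the back-off intervals in ms for the given number of attempts."""
    intervals, current = [], initial
    for _ in range(attempts):
        intervals.append(min(current, max_interval))
        current = int(current * multiplier)  # growth continues but the cap wins
    return intervals
```

backoff_intervals(10) reproduces the 2000 ... 30000 ms sequence from the example table.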
{
"title": "HUB UI",
"pageID": "294675912",
"pageLink": "/display/GMDM/HUB+UI",
"content": "DRAFT:TODO: Grafana dashboards through iframe - https://www.itpanther.com/embedding-grafana-in-iframe/"
},
{
"title": "Integration Tests",
"pageID": "302681782",
"pageLink": "/display/GMDM/Integration+Tests",
"content": "Integration tests are devided into different categories. These categories are used for different environments.Jenkins IT configuration: http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/jenkins/k8s_int_test.groovy"
},
{
"title": "Common Integration Test",
"pageID": "302681798",
"pageLink": "/display/GMDM/Common+Integration+Test",
"content": "Test classTest caseFlowCommonGetEntityTeststestGetEntityByUriCreate HCPGet HCP by URI and validatetestSearchEntityCreate HCPGet entities using filter (get by country code, first name and last name)Validate if entity existstestGetEntityByCrosswalkCreate HCPGet entity by corsswalk and validate if existstestGetEntitiesByUrisCreate HCPGet entity by uris andvalidate if existstestGetEntityCountryCreate HCPGet entity by country and validate if existstestGetEntityCountryOvCreate HCPAdd new countrySend update requestGet HCP's Country and validateMake ignored = true and ov = false on all countriesSend update requestGet HCP's Country and validateCreateHCPTestcreateHCPTestCreate HCPGet entity and validateCreateRelationTestcreateRelationTestCreate HCPCreate HCOCreate Relation between HCP and HCOGet Relation and validateDeleteCrosswalkTestdeleteCrosswalkTestCreate HCODelete crosswalk and validate status responseUpdateHCOTestupdateHCPTestCreate HCOGet created HCOUpdate HCO's nameValidate response statusGet HCO and validate if it is updatedUpdateHCPUsingReltioContributorProviderupdateHCPUsingReltioContributorProviderTrueAndDataProviderFalseCreate HCPGet created HCP and validateUpdate existing corosswalk and set contributorProvider to falseAdd new contributor provider crosswalkUpdate first nameSend update HCP requestValidate if it is updatedPublishingEventTesttest1_hcpCreate HCPWait for HCP_CREATED eventUpdate HCP first nameWait for HCP_CHANGED eventGet entity and validatetest2_hcpCreate HCPWait for HCP_CREATED eventUpdate HCP's last nameWait for HCP_CHANGED eventDelete crosswalkWait for HCP_REMOVED eventtest3_hcoCreate HCOWait for HCO_CREATED eventUpdate HCO's nameWait for HCO_CHANGED eventDelete crosswalkWait for HCO_REMOVED event"
},
{
"title": "Integration Test For Iqvia Model",
"pageID": "302681788",
"pageLink": "/display/GMDM/Integration+Test+For+Iqvia+Model",
"content": "Test classTest caseFlowCRUDHCOAsynctestSend HCORequest to Kafka topicWait for created event and validateUpdate HCO's name and send HCORequest to Kafka topicWait for updated event and validateRemove entitiesCRUDHCOAsyncComplextestCreate Source HCOSend HCORequest with Source HCO to Kafka TopicWait for created event and validateCreate Source Department HCO - set Source HCO as Main HCOSend HCORequest with Source Department HCOWait for event and validateRemove entitiesCRUDHCPAsynctestSend HCPRequest to Kafka topicWait for created event and validateUpdate HCP's Last Name and send HCORequest to Kafka topicWait for updated event and validateRemove entitiesCRUDPostBulkAsynctestHCOSend EntitiesUpdateRequest with multiple HCO entities to Kafka topicWait for entities-create event with specific correlactionId headerValidate message payload and check if all entities are createdRemove entitiestestHCPSend EntitiesUpdateRequest with multiple HCP entities to Kafka topicWait for entities-create event with specific correlactionId headerValidate message payload and check if all entities are createdRemove entitiestestHCPRejectedSend EntitiesUpdateRequest with multiple incorrect HCP entities to Kafka topicWait for event with specific correlactionId headerCheck if all entities have ValidatioError and status is failedCreateRelationAsynctestCreateCreate HCOCreate HCPSend RelationRequest with Relation Activity between HCP and HCO to Kafka topicWait for event with specific correlactionId header and validate statustestCreateRelationsCreate HCOCreate HCP_1Create HCP_2 and validate responseCreate HCP_3 and validate responseCreate HCP_4 and validate responseCreate Activity Relations between HCP_1 → HCO, HCP_2 → HCO, HCP_3 → HCO, HCP_4 → HCOSend RelationRequest event with all relations to Kafka topicWait for event with specific correlactionId header and validate statusRemove entitiestestCraeteWithAddressCopyCreate HCOCreate HCPCreate Activity Relation between HCP and HCOSend 
RelationRequest event to Kafka topic with param copyAddressFromTarget = trueWait for event with specific correlationId header and validate status is createdGet HCP and HCOValidate updated HCP - check if address exists and contains HcoName attributeRemove entitiestestDeactivateRelationCreate HCOCreate HCPCreate Activity Relation between HCP and HCO with PrimaryAffiliationIndicator = trueSend RelationRequest event to Kafka topicWait for event with specific correlationId header and validate status is createdUpdate Relation - set delete date on nowSend RelationRequest event to Kafka topicWait for event with specific correlationId header and validate status is deletedRemove entitiesHCOAsyncErrorsTestCasetestSend HCORequest to Kafka topic - create HCO with incorrect valuesWait for event with specific correlationId header and validate status is failedHCPAsyncErrorsTestCasetestSend HCPRequest to Kafka topic - create HCP without permissionsWait for event with specific correlationId header and validate status is failedUpdateRelationAsynctestCreate HCO and validate status createdCreate HCP with affiliatedHCO and validate status createdGet HCP and check if Workplace relation existsGet existing RelationPatch Relation - update ActEmail.Email attribute and validate if status is updatedGet Relation and validate if ActEmail list size is 1Add Country attribute to RelationSend RelationRequest event to Kafka topic with updated RelationWait for event with specific correlationId header and validate status is updatedGet Relation and check if ActEmail and Country existAdd AffiliationStatus attribute to RelationSend RelationRequest event to Kafka topic with updated RelationWait for event with specific correlationId header and validate status is updatedGet Relation and check if ActEmail, Country and AffiliationStatus existRemove entitiesBundlingTesttestSend multiple HCORequests to Kafka topic - create HCOsFor each request wait for event with status created and collect HCO's 
uriCheck if number of requests equals number of received eventsSend multiple HCPRequests to Kafka topic - create HCPsFor each request wait for event with status created and collect HCP's uriCheck if number of requests equals number of received eventsSend multiple RelationRequests to Kafka topic - create RelationFor each request wait for event with status created and collect Relation's uriCheck if number of requests equals number of received eventsSet delete date on now for every HCOSend multiple HCORequests to Kafka topicFor each request wait for event with status deletedSet delete date on now for every HCPSend multiple HCPRequests to Kafka topicFor each request wait for event with status deletedDCRResponseTestcreateAndAcceptDCRThenTryToAcceptAgainTestCreate Hospital HCOCreate Department HCOSet Hospital HCO as Department's Main HCOCreate HCP with Affiliated HCO as DepartmentCheck if DCR is createdAccept DCR and check if response is OKAccept DCR again and check if response is BAD_REQUESTRemove entitiescreateAndPartialAcceptThenConfirmNoLoopCreate Hospital HCOCreate Department HCOSet Hospital HCO as Department's Main HCOCreate HCP with Affiliated HCO as DepartmentCheck if DCR is createdPartial accept DCR and check if response is OKGet HCP entity and check if ValidationStatus attribute is \"partialValidated\"Check if DCR is not created - confirms that DCR creation does not loopRemove entitiescreateAndRejectDCRThenTryToRejectAgainTestCreate Hospital HCOCreate Department HCOSet Hospital HCO as Department's Main HCOCreate HCP with Affiliated HCO as DepartmentCheck if DCR is createdReject DCR and check if response is OKReject DCR again and check if response is BAD_REQUESTRemove entitiesDeriveHCPAddressesTestCasederivedHCPAddressesTestCreate HCP and validate responseCreate HCO Department with 1 Address and validate responseCreate HCO Hospital with 2 Addresses and validate responseCreate \"Activity\" Relation HCP → HCO Department and validate responseCreate \"Has Health Care Role\" 
Relation HCP → HCO Hospital and validate responseGet HCP and check if it contains Hospital's AddressesUpdate HCO Hospital Address and validate responseGet HCP and check if it contains updated Hospital's AddressesRemove HCO Hospital Address and validate responseGet HCP and check if it contains Hospital's Addresses (without removed)Remove \"Has Health Care Role\" Relation HCP → HCO Hospital and validate responseGet HCP and check if Addresses are removedRemove entitiesEVRDCRUpdateHCPLUDTestCasetestCreate Hospital HCOCreate Department HCOSet Hospital HCO as Department's Main HCOCreate HCP with Affiliated HCO as DepartmentGet Change requests and check that DCR was createdUpdate HCPValidationStatus = notvalidatedchange existing GRV crosswalk - set DataProvider = trueadd DCR crosswalk - EVR set ContributorProvider = trueadd another EVR crosswalk set DataProvider = trueSend update request and validate responseUpdate HCP (partial update)ValidationStatus = validatedRemove First and Last NameRemove crosswalksSend update request and validate responseGet HCP and validateCheck if the ValidationStatus & LUD (updateDate/singleAttributeUpdateDate) were refreshedRemove crosswalksExistingDepartmentAndHCPTestCasecreateHCP_HCPNotInPendingStatus_NoDCRCreate Hospital HCOCreate Department HCO with Hospital HCO as MainHCOCreate HCP with affiliated HCO (Department HCO) and ValidationStatus = validatedGet HCP and validate attributesGet Change requests and check if the list is emptyRemove crosswalkscreateHCP_HCPIsInPendingStatus_HCPDCRCreatedCreate Hospital HCOCreate Department HCO with Hospital HCO as MainHCOCreate HCP with affiliated HCO (Department HCO) and ValidationStatus = pendingGet HCP and validate attributesGet Change requests and check if there is one NEW_HCP change requestRemove crosswalkscreateHCP_HCPHasTwoWorkplaces_HCPAndWorkplaceDCRCreatedCreate Hospital HCOCreate Department1 HCO with Hospital HCO as MainHCOCreate Department2 HCO with Hospital HCO as MainHCOCreate HCP with affiliated HCO 
(Department1 HCO) and ValidationStatus = pendingGet HCP and validate attributeshas only one Workplace (Department1 HCO)Update HCP with affiliated HCO (Department2 HCO) and ValidationStatus = pendingGet HCP and validate attributeshas only one Workplace (Department2 HCO)Get Change requests and check if there is one NEW_HCP change requestRemove crosswalksNewHCODCRTestCasescreateHCP_DepartmentDoesNotExist_HCOL1DCRCreate Hospital HCOCreate Department HCO with Hospital HCO as MainHCOCreate HCP with affiliated HCO (Department HCO)Get HCP and validate attributesValidate Workplace and MainWorkplaceGet Change requests and check if the list is emptyRemove crosswalkscreateHCP_HospitalAndDepartmentDoesNotExist_HCOL1DCRCreate Department HCO with Hospital HCO (not created yet) as MainHCOCreate HCP with affiliated HCO (Department HCO) and ValidationStatus = pendingGet HCP and validate attributesGet HCO Department and validate attributesGet Change requests and check if there is one NEW_HCO_L2 change requestRemove crosswalksNewHCPDCRTestCasecreateHCPTestCreate HCO HospitalCreate HCO DepartmentCreate HCP with affiliated HCO (Department HCO)Get HCP and validate Workplace and MainWorkplaceRemove crosswalkscreateHCPPendingTestCreate HCO HospitalCreate HCO DepartmentCreate HCP with affiliated HCO (Department HCO) and ValidationStatus = pendingValidate HCP responseValidate if DCR is createdRemove crosswalkscreateHCPNotValidatedTestCreate HCO HospitalCreate HCO DepartmentCreate HCP with affiliated HCO (Department HCO) and ValidationStatus = notvalidatedValidate HCP responseValidate if DCR is createdRemove crosswalkscreateHCPNotValidatedMergedIntoNotValidatedTestCreate HCP_1 with ValidationStatus = notvalidated (Merge winner HCP)Create HCO HospitalCreate HCO DepartmentCreate HCP_2 with affiliated HCO (Department HCO) and ValidationStatus = notvalidatedValidate HCP responseValidate if DCR is not createdRemove crosswalkscreateHCPPendingMergedIntoNotValidatedTestCreate HCP_1 with 
ValidationStatus = notvalidated (Merge winner HCP)Create HCO HospitalCreate HCO DepartmentCreate HCP_2 with affiliated HCO (Department HCO) and ValidationStatus = pendingValidate HCP responseValidate if DCR is createdRemove crosswalkscreateHCPPendingMergedIntoNotValidatedWithAnotherGRVNotValidatedTestCreate HCP_1 with ValidationStatus = notvalidated (Merge winner HCP)Create HCO HospitalCreate HCP_2 with ValidationStatus = notvalidated (Merge loser HCP)Create HCO DepartmentCreate HCP_3 with affiliated HCO (Department HCO) and ValidationStatus = pendingValidate if DCR is createdRemove crosswalkscreateHCPNotValidatedMergedIntoNotValidatedWithAnotherGRVNotValidatedTestCreate HCP_1 with ValidationStatus = notvalidated (Merge winner HCP)Create HCO HospitalCreate HCP_2 with ValidationStatus = notvalidated (Merge loser HCP)Create HCO DepartmentCreate HCP_3 with affiliated HCO (Department HCO) and ValidationStatus = notvalidatedValidate if DCR is not createdRemove crosswalkscreateHCPPendingMergedIntoNotValidatedWithGRVAsUpdateTestCreate HCP_1 with ValidationStatus = notvalidated (Merge winner HCP)Create HCO HospitalCreate HCP_2 with ValidationStatus = notvalidated (Merge loser HCP)Create HCO DepartmentCreate HCP_3 with affiliated HCO (Department HCO) and ValidationStatus = notvalidatedGet HCP and validate crosswalk GRV count == 3Validate if DCR is not createdUpdate HCP_3 set code = pendingValidate if DCR is createdRemove crosswalksPfDataChangeRequestLiveCycleTesttestCreate HCO HospitalCreate HCO Department with parent HCO HospitalCreate HCP with affiliated HCO (Department HCO) and ValidationStatus = pendingCheck if DCR existsCheck if PfDataChangeRequest existsAccept DCRCheck that HCP ValidationStatus == validatedCheck that PfDataChangeRequest is closedRemove crosswalksResponseInfoTestTestCreate HCO HospitalCreate HCO Department with parent HCO HospitalCreate HCP_1 with affiliated HCO (Department HCO) and ValidationStatus = pendingCreate HCP_2 with affiliated HCO (Department 
HCO) and ValidationStatus = pendingCheck that DCR_1 existsCheck that DCR_2 existsCheck that PfDataChangeRequest existsRespond for DCR_1 - update HCP with merged urischange First Nameset ValidationStatus = validatedGet HCP and check if ValidationStatus is validatedCheck if PfDataChangeRequest is closed and validate ResponseInfoRespond for DCR_2 - accept and validate messageCheck if PfDataChangeRequest is closed and validate ResponseInfoCheck that DCR_2 does not existRemove crosswalksRevalidateNewHCPDCRTestCasetestCreate Parent HCO and validate responseCreate Department HCO with Parent HCO and validate responseCreate HCP with affiliated HCO (Department HCO), ValidationStatus = pending and validate responseCheck that DCR existsCheck that PfDataChangeRequest existsRespond to DCR - acceptCheck that HCP has ValidationStatus = validatedSend revalidate event to Kafka topicCheck that new DCR was createdCheck that previous PfDataChangeRequest has ResponseStatus=acceptCheck that new PfDataChangeRequest existsCheck that HCP has ValidationStatus = pendingRemove crosswalksStandarNonExistingDepartmentTestCasecreateNewHCPTestCreate Hospital HCOCreate HCP with a new affiliated HCO (Department HCO with Hospital HCO as MainHCO)Get HCP and validate attributes (Workplace and MainWorkplace)UpdateHCPPhonestestCreate HCP and validate responseUpdate Phone and send patchHCP requestValidate response status is OKRemove crosswalksGetEntityTeststestGetEntityByUriCreate HCP with ValidationStatus = validated and affiliatedHcos (HCO_1, HCO_2)Get HCP by uri and validate attributesRemove crosswalkstestSearchEntityCreate HCP with ValidationStatus = validated and affiliatedHcos (HCO_1, HCO_2)Get entities using filter - HCP by country, first name and last nameValidate if entity existsRemove crosswalkstestSearchEntityWithoutCountryFilterCreate HCP with ValidationStatus = validated and affiliatedHcos (HCO_1, HCO_2)Get by crosswalk HCO_1 and check if existsGet by crosswalk HCO_2 and check if existsGet entities 
using filter - HCO by country and (HCO_1 name or HCO_2 name)Validate if both HCOs existRemove crosswalkstestGetEntityByCrosswalkCreate HCP with ValidationStatus = validated and affiliatedHcos (HCO_1, HCO_2)Get HCP by crosswalkValidate if HCP existsRemove crosswalkstestGetEntitiesByUrisCreate HCP with ValidationStatus = validated and affiliatedHcos (HCO_1, HCO_2)Get HCP by uriValidate if HCP existsRemove crosswalkstestGetEntityCountryCreate HCP with ValidationStatus = validated and affiliatedHcos (HCO_1, HCO_2)Get HCP's countryValidate responseRemove crosswalkstestGetEntityCountryOvCreate HCP with ValidationStatus = validated, affiliatedHcos (HCO_1, HCO_2) and Country = BrazilUpdate HCPupdate existing crosswalk - set ContributorProvider = trueadd new crosswalk as DataProviderset Country ignored = trueupdate Country - set to ChinaGet HCP's Country and validatecheck value == BR-Brazilcheck ov == trueUpdate HCP - make ignored=true, ov=false on all countriesGet HCP's Country and validatelookupCode == BRRemove crosswalksMergeUnmergeHCPTestcreateHCP1andHCP2_checkMerge_checkUnmerge_APICreate HCP_1 and validate responseCreate HCP_2 and validate responseMerge HCP_1 with HCP_2Get HCP_1 after merge and validate attributesGet HCP_2 after merge and validate attributesUnmerge HCP_1 and HCP_2Get HCP_1 after unmerge and validate attributesGet HCP_2 after unmerge and validate attributesUnmerge HCP_1 and HCP_2 - validate if response code is BAD_REQUESTMerge HCP_1 and NOT_EXISTING_URI - validate if response code is NOT_FOUNDRemove crosswalksHCPMatcherTestCasetestPositiveMatchCreate 2 identical HCP objectsCheck that objects matchtestNegativeMatchCreate 2 different HCP objectsCheck that objects do not matchGetEntitiesTesttestGetHCPsGet entities with filter: country = BR and entityType = HCPValidate responseAll entities are HCPAt least one entity has WorkplacetestGetHCOsGet entities with filter: country = BR and entityType = HCOValidate responseAll entities are 
HCOGetEntityUSTestcreateHCPTestCreate HCP and validate responseGet HCP and check if existsRemove crosswalks"
},
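A recurring step in the asynchronous flows above is "send a request to a Kafka topic, then wait for the event carrying the matching correlationId header and validate its status". The matching step can be sketched in isolation with in-memory events; the event shape and helper name below are illustrative assumptions, not the actual MDM HUB test-harness API.

```python
# Sketch of the "wait for event with a specific correlationId header and
# validate status" step from the async test flows above. The event shape
# and helper name are illustrative assumptions, not the actual MDM HUB
# test-harness API.

def await_event(received, correlation_id):
    """Return the first event whose correlationId header matches."""
    for event in received:
        if event.get("correlationId") == correlation_id:
            return event
    raise TimeoutError(f"no event for correlationId {correlation_id!r}")

# In-memory stand-in for events consumed from the response topic.
received = [
    {"correlationId": "abc-1", "type": "HCP_CREATED", "status": "created"},
    {"correlationId": "abc-2", "type": "HCO_CREATED", "status": "created"},
]

event = await_event(received, "abc-1")
assert event["status"] == "created"  # the flows assert on this status
print(event["type"])  # → HCP_CREATED
```

In the real tests the received list would come from a Kafka consumer polling the response topic, with a poll timeout in place of the immediate TimeoutError.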
{
"title": "Integration Test For COMPANY Model",
"pageID": "302681792",
"pageLink": "/display/GMDM/Integration+Test+For+COMPANY+Model",
"content": "Test classTest caseFlowAttributeSetterTestTestAttributeSetterCreate HCP with TypeCode attributeGet entity and validate if it has autofilled attributesUpdate TypeCode field: send \"None\" as attribute valueUpdate HCP requestGet entity and validate autofilled attributes by DQ rulesUpdate TypeCode fieldUpdate HCP requestGet entity and validate autofilled attributes by DQ rulesUpdate TypeCode fieldUpdate HCP requestGet entity and validate autofilled NON-HCP valueSet HCP's crosswalk delete dateUpdate and validate if delete date has been setBatchControllerTestmanageBatchInstance_checkPermissionsWithLimitationCreate batch instanceCreate batch stageValidate response code: 403 and message: Cannot access the processor which has been protectedGet batch instance with incorrect nameValidate response code: 403 and message: Batch 'testBatchNotAdded' is not allowed. Update batch stage with existing stage nameUpdate batch stage with limited userValidate response code: 403 and message: Stage '' is not allowed.Update batch stage with not authorized stage nameValidate response code: 403 and message: Stage '' passed in Body is not allowed.createBatchInstanceCreate batch instance and validateComplete stage 1 and start stage 2Validate stagesComplete stage 2Start stage 3Validate all 3 stagesComplete stage 3 and finish batchGet batch instance and validateTestBatchBundlingErrorQueueTesttestBatchWorkflowTestCreate batch instanceGet errors and check if there are no errorsCreate batch stage: HCO_LOADINGCreate batch stage: HCP_LOADINGCreate batch stage: RELATION_LOADINGSend entities to HCO_LOADING stageFinish HCO_LOADING stageCheck sender job status - validate if all entities were sent to ReltioCheck processing job status - validate if all entities were processedSend entities to HCP_LOADING stageFinish HCP_LOADING stageCheck sender job status - validate if all entities were sent to ReltioCheck processing job status - validate if all entities were processedSend relations to RELATION_LOADING 
stageFinish RELATION_LOADING stageCheck sender job status - validate if all relations were sent to ReltioCheck processing job status - validate if all relations were processedGet batch instance and validate completion statusValidate expected errorsResubmit errorsValidate expected errorsValidate if all errors were resubmittedTestBatchBundlingTesttestBatchWorkflowTestCreate batch instanceCreate batch stage: HCO_LOADINGCreate batch stage: HCP_LOADINGCreate batch stage: RELATION_LOADINGSend entities to HCO_LOADING stageFinish HCO_LOADING stageCheck sender job status - validate if all entities were sent to ReltioCheck processing job status - validate if all entities were processedSend entities to HCP_LOADING stageFinish HCP_LOADING stageCheck sender job status - validate if all entities were sent to ReltioCheck processing job status - validate if all entities were processedSend relations to RELATION_LOADING stageFinish RELATION_LOADING stageCheck sender job status - validate if all relations were sent to ReltioCheck processing job status - validate if all relations were processedGet batch instance and validate completion statusGet Relations by crosswalk and validateTestBatchHCOBulkTesttestBatchWorkflowTestCreate batch instanceCreate batch stage: HCO_LOADINGSend entities to HCO_LOADING stageFinish HCO_LOADING stageCheck sender job status - validate if all entities were sent to ReltioCheck processing job status - validate if all entities were processedGet batch instance and validate completion statusGet entities by crosswalk and validateTestBatchHCOTesttestBatchWorkflowTestCreate batch instanceCreate batch stage: HCO_LOADINGSend entities to HCO_LOADING stageFinish HCO_LOADING stageCheck sender job status - validate if all entities were sent to ReltioCheck processing job status - validate if all entities were processedGet batch instance and validate completion statusGet entities by crosswalk and validate created statustestBatchWorkflowTest_CheckFAILonLoadJobCreate batch 
instanceCreate batch stage: HCO_LOADINGSend entities to HCO_LOADING stageUpdate batch stage status: FAILEDGet batch instance and validatetestBatchWorkflowTest_SendEntities_Update_and_MD5SkipCreate batch instanceCreate batch stage: HCO_LOADINGSend entities to HCO_LOADING stageFinish HCO_LOADING stageGet batch instance and validate completion statusGet entities by crosswalk and validate created statusCreate batch instanceCreate batch stage: HCO_LOADINGSend entities to HCO_LOADING stage (skip 2 entities - MD5 checksum changed)Finish HCO_LOADING stageGet batch instance and validate completion statusGet entities by crosswalk and validate update statustestBatchWorkflowTest_SendEntities_Update_and_DeletesProcessingCreate batch instanceCreate batch stage: HCO_LOADINGSend entities to HCO_LOADING stageFinish HCO_LOADING stageCheck sender job status - validate if all entities were sent to ReltioCheck processing job status - validate if all entities were processedCheck deleting job status - validate if all entities were sentCheck deleting processing job - validate if all entities were processedGet batch instance and validate completion statusGet entities by crosswalk and validate delete status-- second runCreate batch instanceCreate batch stage: HCO_LOADINGSend entities to HCO_LOADING stage (skip 2 entities - delete in post processing)Finish HCO_LOADING stageCheck sender job status - validate if all entities were sent to ReltioCheck processing job status - validate if all entities were processedCheck deleting job status - validate if all entities were sentCheck deleting processing job - validate if all entities were processedGet batch instance and validate completion statusGet entities by crosswalk and validate delete status-- third runCreate batch instance for checking activationCreate batch stage: HCO_LOADINGSend entities to HCO_LOADING stageFinish HCO_LOADING stageCheck sender job status - validate if all entities were sent to ReltioCheck processing job status - validate if all 
entities were processedCheck deleting job status - validate if all entities were sentCheck deleting processing job - validate if all entities were processedGet batch instance and validate completion statusGet entities by crosswalk and validate delete statusTestBatchHCPErrorQueueTesttestBatchWorkflowTestCreate batch instanceCreate batch stage: HCP_LOADINGGet errors and check if there are no errorsSend entities to HCP_LOADING stageFinish HCP_LOADING stageCheck sender job status - validate if all entities were sent to ReltioCheck processing job status - validate if all entities were processedGet errors and validate if expected errors existResubmit errorsGet errors and validate if all were resubmittedTestBatchHCPPartialOverwriteTesttestBatchWorkflowTestCreate HCPCreate batch instanceCreate batch stage: HCP_LOADINGSend entities to HCP_LOADING stage with update last nameFinish HCP_LOADING stageCheck sender job status - validate if all entities are created in MongoCheck processing job status - validate if all entities were processedGet batch instance and validate completion statusGet entities by crosswalk and validateTestBatchHCPSoftDependentTesttestBatchWorkflowTestCreate batch instanceCreate batch stage: HCP_LOADINGCheck Sender job status - SOFT DEPENDENT Send entities to HCP_LOADING stageFinish HCP_LOADING stageCheck sender job status - validate if all entities are sent to ReltioCheck processing job status - validate if all entities were processedGet batch instance and validate completion statusGet entities by crosswalk and validate created statusTestBatchHCPTesttestBatchWorkflowTestCreate batch instanceCreate batch stage: HCP_LOADINGSend entities to HCP_LOADING stageFinish HCP_LOADING stageCheck sender job status - validate if all entities are sent to ReltioCheck processing job status - validate if all entities were processedGet batch instance and validate completion statusGet entities by crosswalk and validate created statusTestBatchMergeTesttestBatchWorkflowTestCreate 4 x HCP 
and validate response statusGet entities and validate if they are createdCreate batch instanceCreate batch stage: MERGE_ENTITIES_LOADINGSend merge entities objects (Reltio, Onekey)Finish MERGE_ENTITIES_LOADING stageCheck sender job status - validate if all tags are sent to ReltioCheck processing job status - validate if all entities were processedGet batch instance and validate completion statusGet entities and validate update status (check if tags are visible in Reltio)Create batch instanceCreate batch stage: MERGE_ENTITIES_LOADINGSend unmerge entities objects (Reltio, Onekey)Finish MERGE_ENTITIES_LOADING stageCheck sender job status - validate if all tags are sent to ReltioCheck processing job status - validate if all entities were processedGet batch instance and validate completion statusTestBatchPatchHCPPartialOverwriteTestCreate batch instanceCreate batch stage: HCP_LOADINGCreate HCP entity with crosswalk's delete date set on nowSend entities to HCP_LOADING stageFinish HCP_LOADING stageCheck sender job status - validate if all entities are sent to ReltioCheck processing job status - validate if all entities were processedGet batch instance and validate completion statusGet entities by crosswalk and validate created statusCreate batch instanceCreate batch stage: HCP_LOADINGSend entities PATCH to HCP_LOADING stage with empty crosswalk's delete date and missing first and last nameFinish HCP_LOADING stageCheck sender job status - validate if all entities are sent to ReltioCheck processing job status - validate if all entities were processedGet batch instance and validate completion statusGet entities by crosswalk and validate if they are updatedTestBatchRelationTesttestBatchWorkflowTestCreate batch instanceCreate batch stage: HCO_LOADINGCreate batch stage: HCP_LOADINGCreate batch stage: RELATION_LOADINGSend entities to HCO_LOADING stageFinish HCO_LOADING stageCheck sender job status - validate if all entities were sent to ReltioCheck processing job status - validate if all 
entities were processedSend entities to HCP_LOADING stageFinish HCP_LOADING stageCheck sender job status - validate if all entities were sent to ReltioCheck processing job status - validate if all entities were processedSend relations to RELATION_LOADING stageFinish RELATION_LOADING stageCheck sender job status - validate if all relations were sent to ReltioCheck processing job status - validate if all relations were processedGet batch instance and validate completion statusTestBatchTAGSTesttestBatchWorkflowTestCreate HCPGet HCP and check if there are no tagsCreate batch instanceCreate batch stage: TAGS_LOADINGSend request: Append entity tags objectsFinish TAGS_LOADING stageCheck sender job status - validate if all entities were sent to ReltioCheck processing job status - validate if all entities were processedGet batch instance and validate completion statusCreate batch instanceCreate batch stage: TAGS_LOADING - DELETESend request: Delete entity tags objectsCheck sender job status - validate if all entities were sent to ReltioCheck processing job status - validate if all entities were processedGet batch instance and validate update statusGet entity and check if tags are removed from ReltioCOMPANYGlobalCustomerIdSearchOnLostMergeEntitiesTesttestCreate first HCP and validate response statusCreate second HCP and validate response statusCreate third HCP and validate response statusMerge HCP2 with HCP3 and validate response statusMerge HCP2 with HCP1 and validate response statusGet entities: filter by COMPANYGlobalCustomerID and HCP1UriValidate if existsGet entities: filter by COMPANYGlobalCustomerID and HCP2UriValidate if existsGet entities: filter by COMPANYGlobalCustomerID and HCP3UriValidate if existsCOMPANYGlobalCustomerIdTesttestCreate HCP_1 with RX_AUDIT crosswalkWait for HCP_CREATED eventCreate HCP_2 with GRV crosswalkWait for HCP_CREATED eventMerge both HCP's with RX_AUDIT being winnerWait for HCP_MERGE, HCP_LOST_MERGE and HCP_CHANGED eventsGet entities by uri 
and validate. Check if merge succeeded and resulting profile has winner COMPANYId.Update HCP_1: set delete date on RX_AUDIT crosswalkCheck if entity's COMPANYID has not changed after softDeleting the crosswalkGet HCP_1 and validate COMPANYGlobalCustomerID after soft deleting crosswalkRemove HCP_1 by crosswalkRemove HCP_2 by crosswalktestWithDeleteDateCreate HCP_1 with crosswalk delete dateWait for HCP_CREATED eventCreate HCP_2Wait for HCP_CREATED eventMerge both HCP'sWait for HCP_MERGE, HCP_LOST_MERGE and HCP_CHANGED eventsCheck if merge succeeded and resulting profile has winner COMPANYId.Remove HCP_1 by crosswalkRemove HCP_2 by crosswalkRelationEventChecksumTesttestCreate HCP and validate statusGet HCP and validate if existsCreate HCO and validate statusCreate Employment Relation between HCP and HCO - validate response statusWait for RELATIONSHIP_CREATED event and validateFind Relation by id and keep checksumUpdate Relation title attribute and validate responseWait for RELATIONSHIP_CHANGED eventValidate if checksum has changedDelete HCO crosswalk and validateDelete HCP crosswalk and validateDelete Relation crosswalk and validateCreateChangeRequestTestcreateChangeRequestTestCreate Change RequestCreate HCPGet HCP and validateUpdate HCP's First Name with dcrId from Change RequestInit Change Request and validate response is not nullDelete Change RequestDelete HCP's crosswalkAttributesEnricherNoCachedTesttestCreateFailedRelationNoCacheCreate HCOCreate HCPCreate Relation with missing attributes - validate response status is failedSearch Relation in Mongo and check if not existsAttributesEnricherTesttestCreateCreate HCP and validateCreate HCO and validateCreate Relation and validateGet HCP and validate if ProviderAffiliations attribute existsUpdate HCP's Last NameGet HCP and validate if ProviderAffiliations attribute existsCheck that Last Name is updatedRemove HCP, HCO and Relation by 
crosswalkAttributesEnricherWithDeleteDateOnRelationTesttestCreateAndUpdateRelationWithDeleteDateCreate HCP and validateCreate HCO and validateCreate Relation and validateGet HCP and validate if ProviderAffiliations attribute existsUpdate HCP's Last NameGet HCP and validate if ProviderAffiliations attribute existsCheck if Last Name is updatedSet Relation's crosswalk delete date on now and updateUpdate HCP's Last NameGet HCP and validate that ProviderAffiliations attribute does not existCheck that Last Name is updatedSend update Relation request and check status is deletedAttributesEnricherWithMultipleEndObjectstestCreateWithMultipleEndObjectsCreate HCO_1Create HCO_2Create HCPCreate Relation between HCP and HCO_1Create Relation between HCP and HCO_2Get HCP and validate if ProviderAffiliations attribute existsUpdate HCP's Last NameGet HCP and validate that ProviderAffiliations attribute existsRemove all entitiesUpdateEntityAttributeTestshouldUpdateIdentifierCreate HCP and validateUpdate HCP's attribute: insert identifier and validateUpdate HCP's attribute: update identifier and validateUpdate HCP's attribute: merge identifier and validateUpdate HCP's attribute: replace identifier and validateUpdate HCP's attribute: delete identifier and validateRemove all entities by crosswalkCreateEntityTestcreateAndUpdateEntityTestCreate DCR entityGet entity and validateUpdate DCR ID attributeValidate updated entityGet matches entities and validate that response is not nullRemove entityCreateHCPWithoutCOMPANYAddressIdcreateHCPTestCreate HCPGet HCP and validate fieldsGet generatedId from Mongo cache collection keyIdRegistryValidate if created HCP's address has COMPANYAddressIDCheck if COMPANYAddressID equals generatedIdRemove entityGetMatchesTestcreateHCPTestCreate HCP_1Create HCP_2 with similar attributes and valuesGet matches for HCP_1Check if matches size >= 0TranslateLookupsTesttranslateLookupTestSend get translate lookups request: Type=AddressStatus, 
canonicalCode=A,sourceName=ONEKEYAssert response is not nullDelayRankActivationTesttestCreate HCO_ACREATE HCO_B1CREATE HCO_B2CREATE HCO_B3CREATE RELATION B1 → A (type: OtherHCOtoHCOAffiliations, rel type: REL.BNG, source: ONEKEY)CREATE RELATION B2 → A (type: OtherHCOtoHCOAffiliations, rel type: REL.BNG, source: ONEKEY)CREATE RELATION B3 → A (type: OtherHCOtoHCOAffiliations, rel type: REL.BNG, source: ONEKEY)Check UPDATE ATTRIBUTE events:UPDATE RANK event exists with Rank = 3 for B1.AUPDATE RANK event exists with Rank = 2 for B2.ACheck PUBLISHED events:B3 - RELATIONSHIP_CREATED event exists with Rank = 1B1 - RELATIONSHIP_CHANGED event exists with Rank = 3B2 - RELATIONSHIP_CHANGED event exists with Rank = 2Check order of events:B1 - RELATIONSHIP_CHANGED and B2 - RELATIONSHIP_CHANGED are after UPDATE eventsCREATE HCO_B4CREATE RELATION B4 → A (type: OtherHCOtoHCOAffiliations, rel type: REL.BNG, source: GRV)Check UPDATE ATTRIBUTE events:UPDATE RANK event exists with Rank = 4 for B4.ACheck PUBLISHED events:B4 - RELATIONSHIP_CHANGED event exists with Rank = 4Check order of events:B4 - RELATIONSHIP_CHANGED is after UPDATE eventsCREATE HCO_B5CREATE RELATION B5 → A (type: OtherHCOtoHCOAffiliations, rel type: REL.FPA, source: ONEKEY)Check UPDATE ATTRIBUTE events:UPDATE RANK event exists with Rank = 4 for B1.AUPDATE RANK event exists with Rank = 3 for B2.AUPDATE RANK event exists with Rank = 2 for B3.AUPDATE RANK event exists with Rank = 5 for B4.ACheck PUBLISHED events:B1 - RELATIONSHIP_CHANGED event exists with Rank = 4B2 - RELATIONSHIP_CHANGED event exists with Rank = 3B3 - RELATIONSHIP_CHANGED event exists with Rank = 2B4 - RELATIONSHIP_CHANGED event exists with Rank = 5B5 - RELATIONSHIP_CREATED event exists with Rank = 1Check order of events:All published RELATIONSHIP_CHANGED are after UPDATE_RANK eventsSet deleteDate on B1.ACheck UPDATE ATTRIBUTE events:UPDATE RANK event exists with Rank = 4 for B4.ACheck PUBLISHED events:B4 - RELATIONSHIP_CHANGED event exists with Rank 
= 4Check order of events:Published RELATIONSHIP_CHANGED is after UPDATE_RANK eventGet B2.A relation and check Rank = 3Get B3.A relation and check Rank = 2Get B4.A relation and check Rank = 4Get B5.A relation and check Rank = 1Clear dataRawDataTestshouldRestoreHCPCreate HCP entityDelete HCP by crosswalkSearch entity by name - expected not foundRestore HCP entitySearch entity by nameClear datashouldRestoreHCOCreate HCO entityDelete HCO by crosswalkSearch entity by name - expected not foundRestore HCO entitySearch entity by nameClear datashouldRestoreRelationCreate HCP entityCreate HCO entityCreate relation from HCP to HCODelete relation by crosswalkGet relation by crosswalk - expected not foundRestore relationGet relation by crosswalkClear dataTestBatchUpdateAttributesTesttestBatchWorkFlowTestCreate 2 x HCP and validate response statusGet entities and validate if they are createdTest Insert IdentifiersCreate batch instanceCreate batch stage: UPDATE_ATTRIBUTES_LOADINGInitialize UPDATE_ATTRIBUTES_LOADING stageSend updateEntityAttributeRequest objects with different identifiersFinish UPDATE_ATTRIBUTES_LOADING stageCheck sender job status - validate if all updates are sent to ReltioCheck processing job status - validate if all entities were processedGet batch instance and validate completion statusGet entities and validate update status (check if inserted identifiers are visible in Reltio)Test Update IdentifiersCreate batch instanceCreate batch stage: UPDATE_ATTRIBUTES_LOADINGInitialize UPDATE_ATTRIBUTES_LOADING stageSend updateEntityAttributeRequest objects with different identifiersFinish UPDATE_ATTRIBUTES_LOADING stageCheck sender job status - validate if all updates are sent to ReltioCheck processing job status - validate if all entities were processedGet batch instance and validate completion statusGet entities and validate update status (check if updated identifiers are visible in Reltio)Test Merge IdentifiersCreate batch instanceCreate batch stage: 
UPDATE_ATTRIBUTES_LOADINGInitialize UPDATE_ATTRIBUTES_LOADING stageSend updateEntityAttributeRequest objects with different identifiersFinish UPDATE_ATTRIBUTES_LOADING stageCheck sender job status - validate if all updates are sent to ReltioCheck processing job status - validate if all entities were processedGet batch instance and validate completion statusGet entities and validate update status (check if merged identifiers are visible in Reltio)Test Replace IdentifiersCreate batch instanceCreate batch stage: UPDATE_ATTRIBUTES_LOADINGInitialize UPDATE_ATTRIBUTES_LOADING stageSend updateEntityAttributeRequest objects with different identifiersFinish UPDATE_ATTRIBUTES_LOADING stageCheck sender job status - validate if all updates are sent to ReltioCheck processing job status - validate if all entities were processedGet batch instance and validate completion statusGet entities and validate update status (check if replaced identifiers are visible in Reltio)Test Delete IdentifiersCreate batch instanceCreate batch stage: UPDATE_ATTRIBUTES_LOADINGInitialize UPDATE_ATTRIBUTES_LOADING stageSend updateEntityAttributeRequest objects with different identifiersFinish UPDATE_ATTRIBUTES_LOADING stageCheck sender job status - validate if all updates are sent to ReltioCheck processing job status - validate if all entities were processedGet batch instance and validate completion statusGet entities and validate update status (check if deleted identifiers are visible in Reltio)Remove all entities by crosswalk and all batch instances by id"
},
{
"title": "Integration Test For COMPANY Model China",
"pageID": "302681804",
"pageLink": "/display/GMDM/Integration+Test+For+COMPANY+Model+China",
"content": "Test classTest caseFlowChinaComplexEventCaseshouldCreateHCPAndConnectWithAffiliatedHCOByNameCreate HCO (AffiliatedHCO) and validate responseGet entities with filter by HCO's Name and entityTypeValidate if existsCreate HCP (V2Complex method)with not existing MainHCOwith affiliatedHCO and existing HCO's NameGet HCP and validateCheck if affiliatedHCO Uri equals created HCO uri (Workplace)Remove entitiesshouldCreateHCPAndMainHCOCreate HCO (AffiliatedHCO) and validate responseCreate HCP (V2Complex method)with AffiliatedHCO - set uri from previously created HCOwith MainHCO without uriGet HCP and validateCheck if affiliatedHCO Uri equals created HCO uri (Workplace)Validate Workplace attributesRemove entitiesshouldCreateHCPAndAffiliatedHCOCreate HCO (MainHCO) and validate responseCreate HCP (V2Complex method)with AffiliatedHCO without uri (not existing HCO)with MainHCO - set objectURI from previously created Main HCOGet HCP and validateCheck if MainHCO Uri equals created HCO uri (MainWorkplace)Validate MainWorkplace attributesRemove entitiesshouldCreateHCPAndConnectWithAffiliationsCreate HCO (MainHCO) and validate responseCreate HCO (AffiliatedHCO) and validate responseCreate HCP (V2Complex method)with AffiliatedHCO - set uri from previously created Affiliated HCOwith MainHCO - set objectURI from previously created Main HCOGet HCP and validateCheck if affiliatedHCO Uri equals created HCO uri (Workplace)Check if MainHCO Uri equals created HCO uri (MainWorkplace)Validate Workplace and MainWorkplace attributesRemove entitiesshouldCreateHCPAndAffiliationsCreate HCP (V2Complex method)without AffiliatedHCO uriwithout MainHCO objectURIGet HCP and validateCheck if Workplace is created and has correct attributesCheck if MainWorkplace is created and has correct attributesValidate Workplace and MainWorkplace attributesRemove entitiesChinaSimpleEventCaseshouldPublishCreateHCPInIqiviaModelCreate HCP in COMPANYModel (V2Simple method)Validate responseGet HCP entity and 
validate attributesWait for Kafka output eventValidate eventValidate attributes and check if event is in IqiviaModelRemove entitiesChinaMergeEntityTestCreate HCP_1 (V2Complex method) and validate responseCreate HCP_2 (V2Complex method) and validate responseMerge entities HCP_1 and HCP_2Get HCP by HCP_1 uri and check if existsWait for Kafka event on merge response topicValidate Kafka eventRemove entitiesChinaWorkplaceValidationEntityTestshouldValidateMainHCOCreate HCP (V2Complex method)with 2 affiliatedHCO which do not existwith 1 MainHCO which does not existGet HCP entity and check if existsWait for Kafka event on response topicValidate Kafka eventValidate MainWorkplace (1 exists)Validate Workplaces (2 exists)Validate MainHCO (1 exists)Assert MainWorkplace equals MainHCORemove entities"
},
{
"title": "Integration Test For COMPANY Model DCR2Service",
"pageID": "302681794",
"pageLink": "/display/GMDM/Integration+Test+For+COMPANY+Model+DCR2Service",
"content": "Test classTest caseFlowDCR2ServiceTestshouldCreateHCPTestCreate HCO and validate responseCreate DCR request (hcp-create)Send Apply Change requestGet DCR status and validateValidate created entityRemove entitiesshouldUpdateHCPChangePrimarySpecialtyTestCreate HCPCreate DCR request: update HCP Primary SpecialityValidate DCR responseApply Change requestGet DCR status and validateGet HCP and validateGet DCR and validateRemove all entitiesshouldCreateHCOTestCreate DCR Request (hco-create) and validate responseApply Change requestGet DCR status and validateGet HCO and validateGet DCR and validateRemove all entitiesshouldUpdateHCPChangePrimaryAffiliationTestCreate HCO_1 and validate responseCreate HCO_2 and validate responseCreate HCP with affiliations and validate responseGet HCO_1 and save COMPANYGlobalCustomerIdGet HCP and save COMPANYGlobalCustomerIdGet entities - search by HCO_1's COMPANYGlobalCustomerId and check if existsGet entities - search by HCP's COMPANYGlobalCustomerId and check if existsCreate DCR Request and validate response: update HCP primary affiliationApply Change requestGet DCR status and validateGet HCP and validateGet DCR and validateRemove all entitiesshouldUpdateHCPIgnoreRelationCreate HCO_1 and validate responseCreate HCO_2 and validate responseCreate HCP with affiliations and validate responseGet HCO_1 and save COMPANYGlobalCustomerIdGet HCP and save COMPANYGlobalCustomerIdGet entities - search by HCO_1's COMPANYGlobalCustomerId and check if existsGet entities - search by HCP's COMPANYGlobalCustomerId and check if existsCreate DCR Request and validate response: ignore affiliationApply Change requestGet DCR status and validateWait for RELATIONSHIP_CHANGED eventWait for RELATIONSHIP_INACTIVATED eventGet HCP and validateGet DCR and validateRemove all entitiesshouldUpdateHCPAddPrimaryAffiliationTestCreate HCO and validate responseCreate HCP and validate responseCreate DCR Request: HCP update added new primary affiliationValidate DCR 
responseApply Change requestGet DCR status and validateGet HCP and validateGet DCR and validateRemove all entitiesshouldUpdateHCOAddAffiliationTestCreate HCO_1 and validateCreate HCO_2 and validateCreate DCR Request: update HCO add other affiliation (OtherHCOtoHCOAffiliations)Validate DCR responseApply Change requestGet DCR status and validateGet HCO's connections (OtherHCOtoHCOAffiliations) and validateGet DCR and validateRemove all entitiesshouldInactivateHCPCreate HCP and validate responseCreate DCR Request: Inactivate HCPValidate DCR responseApply Change requestGet DCR status and validateGet HCP and validateGet DCR and validateRemove all entitiesshouldUpdateHCPAddPrivateAddressCreate HCP and validate responseCreate DCR Request: update HCP - add private addressValidate DCR responseApply Change requestGet DCR status and validateGet HCP and validateGet DCR and validateRemove all entitiesshouldUpdateHCPAddAffiliationToNewHCOCreate HCO and validate responseCreate HCP and validate responseCreate DCR Request: update HCP - add affiliation to new HCOValidate DCR responseApply Change requestGet DCR status and validateGet HCP and validateGet HCO entity by crosswalk and save uriGet DCR and validateRemove all entitiesshouldReturnValidationErrorCreate DCR request with unknown entityUriValidate DCR response and check if REQUEST_FAILEDshouldCreateHCPOneKeyCreate HCP and validate responseCreate DCR Request: create OneKey HCPValidate DCR responseGet DCR status and validateGet HCP and validateGet DCR and validateRemove all entitiesshouldCreateHCPOneKeySpecialityMappingCreate HCP and validate responseCreate DCR Request: create OneKey HCP with speciality valueValidate DCR responseGet DCR status and validateGet HCP and validateGet DCR and validateRemove all entitiesshouldCreateHCPOneKeyRedirectToReltioCreate HCP and validate responseCreate DCR Request: create OneKey HCP with speciality value "not found key"Validate DCR responseApply Change RequestGet DCR status and validateGet HCP 
and validateGet DCR and validateRemove all entitiesshouldCreateHCOOneKeyCreate HCO and validate responseCreate DCR Request: create OneKey HCOValidate DCR responseGet DCR status and validateGet HCO and validateGet DCR and validateRemove all entitiesshouldReturnMissingDataExceptionCreate DCR Request with missing dataValidate DCR response: status = REQUEST_REJECTED and response has correct messageshouldReturnForbiddenAccessExceptionCreate DCR Request with forbidden access dataValidate DCR response: status = REQUEST_FAILED and response has correct messageshouldReturnInternalServerErrorCreate DCR Request with internal server error dataValidate DCR response: status = REQUEST_FAILED and response has correct message"
},
{
"title": "Integration Test For COMPANY Model Region AMER",
"pageID": "302681796",
"pageLink": "/display/GMDM/Integration+Test+For+COMPANY+Model+Region+AMER",
"content": "Test classTest caseFlowMicroBrickTestshouldCalculateMicroBricksCreate HCP and validate responseWait for event on ChangeLog topic with specified countryGet HCP entity and validate MicroBrickUpdate HCP with new zip codes and validate responseWait for event on ChangeLog topic with specified countryGet HCP entity and validate MicroBrickDelete entitiesValidateHCPTestvalidateHCPTestCreate HCP and validate response statusCreate validation request with valid paramsAssert if response is ok and validation status is "Valid"validateHCPTestNotValidCreate HCP and validate response statusCreate validation request with invalid paramsAssert if response is ok and validation status is "NotValid"validateHCPLookupTestCreate HCP with "Speciality" attribute and validate response statusCreate lookup validation request with "Speciality" attributeAssert if response is ok and validation status is "Valid""
},
{
"title": "Integration Test For COMPANY Model Region EMEA",
"pageID": "347655258",
"pageLink": "/display/GMDM/Integration+Test+For+COMPANY+Model+Region+EMEA",
"content": "Test classTest caseFlowAutofillTypeCodeTestshouldProcessNonPrescriberCreate HCP entityValidate type code value is Non-Prescriber on output topicInactivate HCP entityValidate type code value is Non-Prescriber on history inactive topicDelete entityshouldProcessPrescriberCreate HCP entityValidate type code value is Prescriber on output topicInactivate HCP entityValidate type code value is Prescriber on history inactive topicDelete entityshouldProcessMergeCreate first HCP entityValidate type code is Prescriber on output topicCreate second HCP entityValidate type code is Non-Prescriber on output topicMerge entitiesValidate type code is Prescriber on output topicInactivate first entityValidate type code is Non-PrescriberDelete second entity crosswalkValidate entity has end date on output topicValidate type code value is Prescriber on output topicDelete entityshouldNotUpdateTypeCodeCreate HCP entity with correct type code valueValidate there is no type code value provided by HUB technical source on output topicDelete entityshouldProcessLookupErrorsCreate HCP entity with invalid sub type code and speciality valuesValidate type code value is concatenation of sub type code and speciality values on output topicInactivate HCP entityValidate type code value is concatenation of sub type code and speciality values on history inactive topicDelete entity"
},
{
"title": "Integration Test For COMPANY Model Region US",
"pageID": "302681784",
"pageLink": "/display/GMDM/Integration+Test+For+COMPANY+Model+Region+US",
"content": "Test classTest caseFlowCRUDMCOAsynctestSend MCORequest to Kafka topicWait for created eventValidate created MCOUpdate MCO's nameSend MCORequest to Kafka topicWait for updated eventValidate updated entityDelete all entitiesTestBatchMCOTesttestBatchWorkflowTestCreate batch instance: testBatchCreate MCO_LOADING stageSend MCO entities to MCO_LOADING stageFinish MCO_LOADING stageCheck sender job status - get batch instance and validate if all entities are createdCheck processing job status - get batch instance and validate if all entities are processedGet batch instance and check batch completion statusGet entities by crosswalk and check if all are createdRemove all entitiestestBatchWorkflowTest_SendEntities_Update_and_MD5SkipCreate batch instance: testBatchCreate MCO_LOADING stageSend MCO entities to MCO_LOADING stageFinish MCO_LOADING stageCheck sender job status - get batch instance and validate if all entities are createdCheck processing job status - get batch instance and validate if all entities are processedGet batch instance and check batch completion statusGet entities by crosswalk and check if all are createdCreate batch instance: testBatchCreate MCO_LOADING stageSend MCO entities to MCO_LOADING stage (skip 2 entities MD5 checksum changed)Finish MCO_LOADING stageCheck sender job status - get batch instance and validate if all entities are createdCheck processing job status - get batch instance and validate if all entities are processedGet batch instance and check batch completion statusGet entities by crosswalk and check if all are createdRemove all entitiesMCOBundlingTesttestSend multiple MCORequest to Kafka topicWait for created event for every MCORequestCheck if number of received events equals number of sent requestsSet crosswalk's delete date on now for every requestSend all updated MCORequests to Kafka topicWait for deleted event for every MCORequestEntityEventChecksumTesttestCreate HCPWait for HCP_CREATED eventGet created HCP by uri and check if 
existsFind by id created HCP in mongo and save "checksum"Update HCP's attribute and send requestWait for HCP_CHANGED eventFind by id created HCP in mongo and saveCheck if old checksum is different than current checksumRemove HCPWait for HCP_REMOVED eventEntityEventsTesttestCreate MCOWait for ENTITY_CREATED eventUpdate MCOWait for ENTITY_CHANGED eventRemove MCOWait for ENTITY_REMOVED eventHCPEventsMergeTesttestCreate HCP_1 and validate responseWait for HCP_CREATED eventGet HCP_1 and validate attributesCreate HCP_2 and validate responseGet HCP_2 and validate attributesMerge HCP_1 and HCP_2Wait for HCP_MERGED eventGet HCP_2 and validate attributesDelete HCP_1 crosswalkWait for HCP_CHANGED event and validate HCP_URIDelete HCP_1 and HCP_2 crosswalksWait for HCP_REMOVED eventDelete HCP_2 crosswalkHCPEventsNotTrimmedMergeTesttestCreate HCP_1 and validate responseWait for HCP_CREATED eventGet HCP_1 and validate attributesCreate HCP_2 and validate responseGet HCP_2 and validate attributesMerge HCP_1 and HCP_2Wait for HCP_MERGED event and validate attributesGet HCP_2 and validate attributesDelete HCP_1 crosswalkWait for HCP_CHANGED event and validate HCP_URIDelete HCP_1 and HCP_2 crosswalksWait for HCP_REMOVED eventDelete HCP_2 crosswalkMCOEventsTesttestCreate MCO and validate responseWait for MCO_CREATED event and validate urisUpdate MCO's name and validate responseWait for MCO_CHANGED event and validate urisDelete MCO's crosswalk and validate response statusWait for MCO_REMOVED event and validate urisRemove entitiesPotentialMatchLinkCleanerTestCreate HCO: Start FLEXGet HCO and validateCreate HCO: End ONEKEYGet HCO and validateGet matches by Start FLEX HCO entityIdValidate matchesGet not matches by Start FLEX HCO entityIdValidate - not match does not existGet Start FLEX HCO from mongo entityMatchesHistory collectionValidate matches from mongoCreate DerivedAffiliation - relation between FLEX and HCOGet matches by Start FLEX HCO entityIdCheck if there are no matchesGet not 
matches by Start FLEX HCO entityIdValidate not matches responseRemove all entitiesUpdateMCOTesttest1_createMCOTestCreate MCO and validate responseGet MCO by uri and validateRemove entitiestest2_updateMCOTestCreate MCO and validate responseUpdate MCO's nameGet MCO by uri and validateRemove entitiestest3_createMCOBatchTestCreate multiple MCOs using postBatchMCOValidate responseRemove entitiesUpdateUsageFlagsTesttest1_updateUsageFlagsCreate HCP and validate responseGet entities using filter (Country & Uri) and validate if HCP existsGet entities using filter (Uri) and validate if HCP existsUpdate usage flags and validate responseGet entity and validate updated usage flagstest2_updateUsageFlagsCreate HCO and validate responseGet entities using filter (Country & Uri) and validate if HCO existsGet entities using filter (Uri) and validate if HCO existsUpdate usage flags and validate responseGet entity and validate updated usage flagstest3_updateUsageFlagsCreate HCO with 2 addresses (COMPANYAddressId=3001 and 3002) and validate responseGet entities using filter (Country & Uri) and validate if HCO existsGet entities using filter (Uri) and validate if HCO existsUpdate usage flags (COMPANYAddressId = 3002, action=set) and validate responseUpdate usage flags (COMPANYAddressId = 3001, action=set) and validate responseGet entity and validate updated usage flagsRemove usage flag and validate responseGet entity and validate updated usage flagsClear usage flag and validate responseGet entity and validate updated usage flags "
},
{
"title": "MDM Factory",
"pageID": "164470002",
"pageLink": "/display/GMDM/MDM+Factory",
    "content": "\nMDM Client Factory was implemented in the MDM manager to select a specific MDM Client (Reltio/Nucleus) based on a client selector configuration. The factory allows registering multiple MDM Clients at runtime and choosing one based on country. To register the factory, the following example configuration needs to be defined:\n\n\tclientDecisionTable\n\n\n\nBased on this configuration, a specific request will be processed by Reltio or Nucleus. Each selector has to define a default view for a specific client. For example, 'ReltioAllSelector' has a definition of a default and a PforceRx view, which corresponds to two factory clients with different user names for Reltio.\n\n\n\tmdmFactoryConfig\n\n\n\nThis map contains the MDM Factory Clients. Each client has a specific unique name and a configuration with URL, username, ●●●●●●●●●●●● other specific values defined for a Client. This unique name is used in the decision table to choose a factory client based on the country in the request.\n "
},
{
"title": "Mulesoft integration",
"pageID": "447577227",
"pageLink": "/display/GMDM/Mulesoft+integration",
"content": "DescriptionThe Mulesoft platform is an integration portal used to integrate Clients from inside and outside of the COMPANY network with the MDM Hub. Mule integrationAPI Endpoints/search/hcp : The operation allows searching for HCPs in a country with multiple filter criteria.MDM compiles the final data for a Profile (Golden Profile) when the data for it is requested./search/hco: The operation allows searching for HCOs in a country with multiple filter criteria./hcp : The API allows management of HCPs in MDM. (Get, Create, Update)/hco : The API allows management of HCOs in MDM. (Get, Create, Update)/lookups : This operation allows fetching the list of values configured in MDM/subscriptions/hcp : This operation allows 'subscribing to' multiple HCP Profiles in a single request. The subscription is done by allowing a source to create a 'crosswalk' of the source system on the profile. It also allows the source system to insert all data that the source system has for the respective profile in MDM while subscribing. The request specification is the same as /hcp POST but it expects an array of profiles. The subscription works in conjunction with Kafka events that are triggered from MDM for any 'subscribed' profiles that are modified by any other source system./entities/{countryType} : This operation directly allows querying MDM Reltio for an Entity with custom Filter criteria. It allows deciding if the response needs to be formatted or if data is required without formatting - as it is provided by MDM./batch/hcp: This resource allows management of multiple HCPs in MDM at a time. (Create, Update)/batch/hco: This resource allows management of multiple HCOs in MDM at a time. (Create, Update)/search/connection: This resource allows viewing the relationships an object (HCP, HCO) has one level in a selected direction (up, down, both)MuleSoft API Catalog:Requests routing on Mule sideThe values below can change. 
Please check in source MDM Tenant URL Configuration - AIS Application Integration Solutions Mule - ConfluenceAPI Country MappingTenantDevTest (QA)StageProdUSUSUSUSUSEMEAUK,IE,GB,SA,EG,DZ,TN,MA,AE,KW,QA,OM,BH,NG,GH,KE,ET,ZW,MU,IQ,LB,JO,ZA,BW,CI,DJ,GQ,GA,GM,GN,GW,LR,MG,ML,MR,SN,SL,TG,MW,TZ,UG,RW,LS,NA,SZ,ZM,IR,SY,CD,LY,AO,BJ,BF,BI,CM,CV,CF,TD,CG,SD,YE,FR,DE,IT,ES,TF,PM,WF,MF,BL,RE,NC,YT,MQ,GP,GF,PF,MC,AD,SM,VA,TR,AT,BE,LU,DK,FO,GL,FI,NL,NO,PT,SE,CH,CZ,GR,CY,PL,RO,SK,IL,AL,AM,IO,GE,IS,MT,NE,RS,SI,MEUK,IE,GB,SA,EG,DZ,TN,MA,AE,KW,QA,OM,BH,NG,GH,KE,ET,ZW,MU,IQ,LB,JO,ZA,BW,CI,DJ,GQ,GA,GM,GN,GW,LR,MG,ML,MR,SN,SL,TG,MW,TZ,UG,RW,LS,NA,SZ,ZM,IR,SY,CD,LY,AO,BJ,BF,BI,CM,CV,CF,TD,CG,SD,YE,FR,DE,IT,ES,TF,PM,WF,MF,BL,RE,NC,YT,MQ,GP,GF,PF,MC,AD,SM,VA,TR,AT,BE,LU,DK,FO,GL,FI,NL,NO,PT,SE,CH,CZ,GR,CY,PL,RO,SK,IL,AL,AM,IO,GE,IS,MT,NE,RS,SI,MEUK,IE,GB,SA,EG,DZ,TN,MA,AE,KW,QA,OM,BH,NG,GH,KE,ET,ZW,MU,IQ,LB,JO,ZA,BW,CI,DJ,GQ,GA,GM,GN,GW,LR,MG,ML,MR,SN,SL,TG,MW,TZ,UG,RW,LS,NA,SZ,ZM,IR,SY,CD,LY,AO,BJ,BF,BI,CM,CV,CF,TD,CG,SD,YE,FR,DE,IT,ES,TF,PM,WF,MF,BL,RE,NC,YT,MQ,GP,GF,PF,MC,AD,SM,VA,TR,AT,BE,LU,DK,FO,GL,FI,NL,NO,PT,SE,CH,CZ,GR,CY,PL,RO,SK,IL,AL,AM,IO,GE,IS,MT,NE,RS,SI,MEUK,GB,IE,AE,AO,BF,BH,BI,BJ,BW,CD,CF,CG,CI,CM,CV,DJ,DZ,EG,ET,GA,GH,GM,GN,GQ,GW,IQ,IR,JO,KE,KW,LB,LR,LS,LY,MA,MG,ML,MR,MU,MW,NA,NG,OM,QA,RW,SA,SD,SL,SN,SY,SZ,TD,TG,TN,TZ,UG,YE,ZA,ZM,ZW,FR,DE,IT,ES,AD,BL,GF,GP,MC,MF,MQ,NC,PF,PM,RE,TF,WF,YT,SM,VA,TR,AT,BE,LU,DK,FO,GL,FI,NL,NO,PT,SE,CH,CZ,GR,CY,PL,RO,SK,ILAMERCA,BR,AR,UY,MX,CL,CO,PE,BO,ECCA,BR,AR,UY,MX,CL,CO,PE,BO,ECCA,BR,AR,UY,MX,CL,CO,PE,BO,ECCA,BR,AR,UY,MXAPACAU,NZ,IN,KR,JP,HK,ID,MY,PK,PH,SG,TW,TH,VN,MO,BN,BD,NP,LK,MNAU,NZ,IN,KR,JP,HK,ID,MY,PK,PH,SG,TW,TH, VN,MO,BN,NP,LK,MNKR,JP,AU,NZ,IN,HK,ID,MY,PK,PH,SG,TW,TH, VN,MO,BN,NP,LK,MNKR,JP,AU,NZ,IN,HK,ID,MY,PK,PH,SG,TW,TH, VN,MO,BNEXUS (IQVIA)Everything elseEverything elseEverything elseEverything elseAPI URLsMuleSoft MDM HCP Reltio API URLsEnvironmentCloud APIGround 
APIDevhttps://muleapic-amer-dev.COMPANY.com/mdm-hcp-reltio-dlb-v1-devhttp://mule4api-comm-amer-dev.COMPANY.com/mdm-hcp-reltio-v1/Testhttps://muleapic-amer-dev.COMPANY.com/mdm-hcp-reltio-dlb-v1-tst/http://mule4api-comm-amer-tst.COMPANY.com/mdm-hcp-reltio-v1Stagehttps://muleapic-amer-stg.COMPANY.com/mdm-hcp-reltio-dlb-v1-stghttp://mule4api-comm-amer-stg.COMPANY.com/mdm-hcp-reltio-v1Prodhttps://muleapic-amer.COMPANY.com/mdm-hcp-reltio-dlb-v1http://mule4api-comm-amer.COMPANY.com/mdm-hcp-reltio-v1IntegrationsIntegrations can be found under the URL below:MDM - AIS Application Integration Solutions Mule - ConfluenceMule documentation referenceSolution Profiles/MDM https://confluence.COMPANY.com/display/AAISM/MDMMDM HCP Reltio APIhttps://confluence.COMPANY.com/display/AAISM/MDM+HCP+Reltio+APIMDM Tenant URL Configurationhttps://confluence.COMPANY.com/display/AAISM/MDM+Tenant+URL+ConfigurationUsing OAuth2 for API AuthenticationDescribes how to use OAuth2How to use an APIDescribes how to request access to the API and how to use itConsumer On-boardingDescribes the consumer onboarding process"
},
{
"title": "Multi view",
"pageID": "164470089",
"pageLink": "/display/GMDM/Multi+view",
    "content": "\nDuring the getEntity or getRelation operation the "ViewAdapterService" is activated. This feature consists of two steps:\n\n\tAdapt\n\n\n\nBased on the following map, each entity will be checked before being returned:\n\nThis means that for the PforceRx view, only entities with source CRMMI will be returned. Otherwise the getEntity or getRelation operations will return a "404" EntityNotFound exception. \nWhen the entity can be returned successfully, the next step starts: \n\n\tFilter\n\n\n\nEach entity is filtered based on the attribute URI list provided in the crosswalks.attribute list.\nThe process takes each attribute from the entity and checks whether this attribute exists in the attribute list restricted for the specific source crosswalk. When this attribute is not on the restricted list, it is removed from the entity. This way we receive an entity for the specific view with only the attributes restricted for the specific source.\nThe MDM publishing HUB has an additional configuration for the multi view process. When an entity with a specific country matches the configuration, the getEntity operation is invoked with country and view name parameters. Then the MDM gateway Factory is activated, and the entity is returned from a specific Reltio instance and saved in a mongo collection suffixed with the view name.\n \nWith this configuration, entities from the BR country will be saved in the entityHistory and entityHistory_PforceRx mongo collections. In the view collection entities will be adapted and filtered by the View Adapter Service. "
},
{
"title": "Playbook",
"pageID": "218437749",
"pageLink": "/display/GMDM/Playbook",
    "content": "The document describes how to request access to different sources. "
},
{
"title": "Issues list",
"pageID": "218441145",
"pageLink": "/display/GMDM/Issues+list",
"content": ""
},
{
"title": "Add a user to a new group.",
"pageID": "218438493",
"pageLink": "/pages/viewpage.action?pageId=218438493",
    "content": "To create a request you need to use the link: https://requestmanager1.COMPANY.com/Group/Then choose as follows:Then search for a group and click 'Request access':As the last step, you need to choose the 'View Cart' button and submit your request. "
},
{
"title": "Snowflake new schema/group/role creation",
"pageID": "218437752",
"pageLink": "/pages/viewpage.action?pageId=218437752",
    "content": "Connect with: https://digitalondemand.COMPANY.com/Click the 'Get Support' button.3. Then click this one:4. And as a next step:5. Now you are on the create ticket site. The most important thing is to place a proper queue name in the detailed description field. For example, a queue name for Snowflake issues looks like this:  gbl-atp-commercial snowflake domain admin. I recommend placing it as the first line, followed by the request text.6. Here is a typical request for a new schema:gbl-atp-commercial snowflake domain adminHello,\nI'd like to ask you to create a new schema and new roles on the Snowflake side.\nNew schema name: PTE_SL\nEnvironments: DEV, QA, STG, PROD, details below:\nDEV\t\nSnowflake instance: https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com\t\nSnowflake DB name: COMM_GBL_MDM_DMART_DEV_DB\nQA\t\nSnowflake instance: https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com\t\nSnowflake DB name: COMM_GBL_MDM_DMART_QA_DB\nSTG\t\nSnowflake instance: https://emeadev01.eu-west-1.privatelink.snowflakecomputing.com\t\nSnowflake DB name: COMM_GBL_MDM_DMART_STG_DB\nPROD\t\nSnowflake instance: https://emeaprod01.eu-west-1.privatelink.snowflakecomputing.com\t\nSnowflake DB name: COMM_GBL_MDM_DMART_PROD_DB\n\nAdd new roles with names (one for each environment): COMM_GBL_MDM_DMART_[Dev/QA/STG/Prod]_PTE_ROLE\nwith read-only access on Customer_SL & PTE_SL\nand\nadd roles with full access to the new schema with names (one for each environment): COMM_GBL_MDM_DMART_[Dev/QA/STG/Prod]_DEVOPS_ROLE - like in the customer_sl schema7. If you are requesting a new role too - like in the example above - you need to request that this role be added to AD. In this case you need to provide primary and secondary owner details for all groups to be created. You can send primary and secondary owner data, or write that the ownership should be set like in another existing role. 8. Ticket example: https://digitalondemand.COMPANY.com/My-Tickets/Ticket-Details?ticket=RF3490743 "
},
{
"title": "AWS ELB NLB configuration request",
"pageID": "218440089",
"pageLink": "/display/GMDM/AWS+ELB+NLB+configuration+request",
    "content": "To create a ticket use this link: http://btondemand.COMPANY.com/Please follow this link if you want to know all the specific steps and click: Snowflake new schema/group/role creationRemember to add a proper queue name!In the request please attach the full list of general information:VPCELB TypeHealth ChecksAllowed incoming traffic fromThen please add the specific ELB NLB information FOR EACH NLB ELB you request - even if the information is the same and obvious:ListenerTarget Group No of ELBTypeEnvironmentELB Health CheckTarget Group additional information: e.g.: 1 Target group with 3 servers:portWhere to add a Listener: e.g.: Listener to be added in ELB #Listener NameSecurity Group informationAdditional information: e.g.: IP ●●●●●●●●●●●● mdm-event-handler (Prod) should be able to access this ELBTicket example: http://btod.COMPANY.com/My-Tickets/Ticket-Details?ticket=IM40983303E.g. request text:VPC: Public\nELB Type: Network Load Balancer\nHealth Checks: Passive\nAllowed incoming traffic from:\n●●●●●●●●●●●● mdm-event-handler (Prod)\n\n1. API\nListener:\napi-emea-prod-gbl-mdm-hub-ext.COMPANY.com:8443\n\nTarget Group:\neuw1z2pl116.COMPANY.com:8443\neuw1z1pl117.COMPANY.com:8443\neuw1z2pl118.COMPANY.com:8443\n\n2. KAFKA\n\n2.1\nListener:\nkafka-emea-prod-gbl-mdm-hub-ext.COMPANY.com:9095\nTG:\neuw1z2pl116.COMPANY.com:9095\neuw1z1pl117.COMPANY.com:9095\neuw1z2pl118.COMPANY.com:9095\n\n2.2\nListener:\nkafka-b1-emea-prod-gbl-mdm-hub-ext.COMPANY.com:9095\nTG:\neuw1z2pl116.COMPANY.com:9095\n\n2.3\nListener:\nkafka-b2-emea-prod-gbl-mdm-hub-ext.COMPANY.com:9095\nTG:\neuw1z1pl117.COMPANY.com:9095\n\n2.4\nListener:\nkafka-b3-emea-prod-gbl-mdm-hub-ext.COMPANY.com:9095\nTG:\neuw1z2pl118.COMPANY.com:9095\n\nGBL-BTI-EXT HOSTING AWS CLOUD"
},
{
"title": "To open a traffic between hosts",
"pageID": "218441143",
"pageLink": "/display/GMDM/To+open+a+traffic+between+hosts",
"content": "To create a ticket using this link: http://btondemand.COMPANY.com/Please follow this link if you want to know all the specific steps and click: Snowflake new schema/group/role creationRemember to add a proper queue name!In a request please attached the full list of general information:SourceIP rangeIP range....Targets - remember to add each targets instancesTarget1NameCnameAddressPortTarget2........Example ticket: http://btod.COMPANY.com/My-Tickets/Ticket-Details?ticket=IM41240161Example request text:Source:\n1. IP range: ●●●●●●●●●●●●●\n2. IP range: ●●●●●●●●●●●●●\n\nTarget1:\nLoadBalancer:\ngbl-mdm-hub-us-prod.COMPANY.com canonical name = internal-pfe-clb-atp-mdmhub-us-prod-001-146249044.us-east-1.elb.amazonaws.com.\nName: internal-pfe-clb-atp-mdmhub-us-prod-001-146249044.us-east-1.elb.amazonaws.com\nAddress: ●●●●●●●●●●●●●●\nName: internal-pfe-clb-atp-mdmhub-us-prod-001-146249044.us-east-1.elb.amazonaws.com\nAddress: ●●●●●●●●●●●●●●\nTarget port: 443\n\nTarget2:\nhosts:\namraelp00007848.COMPANY.com(●●●●●●●●●●●●●●)\namraelp00007849.COMPANY.com(●●●●●●●●●●●●●)\namraelp00007871.COMPANY.com(●●●●●●●●●●●●●●)\ntarget port: 8443"
},
{
"title": "Support information with queue and DL names",
"pageID": "218438484",
"pageLink": "/display/GMDM/Support+information+with+queue+and+DL+names",
"content": "There are a few places when you can send your request:https://digitalondemand.COMPANY.com/getsupporthttps://requestmanager.COMPANY.com/Caution! When we are adding a new client to our architecture there is a MUST to get from him a support queue.Support queuesSystem/component/area nameDedicated queueSupport DLAdditional notesRapid, Digital Labs, GCP etcGBL-EPS-CLOUD OPS FULL SUPPORTEPS-CloudOps@COMPANY.comAWS Global, EMEA environmentsIOD AWS TeamGBL-BTI-IOD AWS FULL SUPPORTEPS-CloudOps@COMPANY.com (same as EPS, not a mistake)Rotating AWS keys, AWS GBL US, AWS FLEX USIODGBL-BTI-IOD FULL OS SUPPORT (VMC)VMware CloudFLEX TeamGBL-F&BO-MAST AMM SUPPORTDL-CBK-MAST@COMPANY.comData, file transfer issues in US FLEX environmentsSAP Interface Team (FLEX)GBL-SS SAP SALES ORDER MGMTQueries regarding SAP FLEX input filesSAP Master Date Team (FLEX)Dianna.OConnell@COMPANY.comQueries regarding data in SAP FLEXNetwork TeamGBL-NETWORK DDIAll domain and DNS changesFirewall TeamGBL-NETWORK ECSGBL-NETWORK-SCS@COMPANY.com"Big" firewall changesSnowflakeGBL-ATP-COMMERCIAL SNOWFLAKE DOMAIN ADMINMDM Hub - non-prodGBL-ADL-ATP GLOBAL MDM - HUB DEVOPSDL-ATP_MDMHUB_SUPPORT@COMPANY.comMDM Hub - prodGBL-ADL-ATP GLOBAL MDM - HUB DEVOPSDL-ATP_MDMHUB_SUPPORT_PROD@COMPANY.comPDKSGBL-BAP-Kubernetes Service L2PDCSOps@COMPANY.comPDKS Kubernetes cluster, ie. 
new MDM Hub Amer NPRODGo to http://containers.COMPANY.com/ "PDKS Get Help" for details.PDKS Engineering TeamGBL-BTI-SYSTEMS ENGINEERING BTCSDL-PDCS-ADMIN@COMPANY.comPDKS Kubernetes - For Environment provisioning/modification issues with CloudBrokerage/IODAMER/APAC/EMEA/GBLUS Reltio - COMPANYGBL-ADL-ATP GLOBAL MDM - RELTIODL-ADL-ATP-GLOBAL_MDM_RELTIO@COMPANY.comTeam responsible for Reltio and ETL batch loads.GBL/USFLEX Reltio - IQVIAGBL-MDM APP SUPPORTCOMPANY-MDM-Support@iqvia.comDL-Global-MDM-Support@COMPANY.comReltio consultingN/ASumit Singh - reltio consulting (NO support)sumit.singh@reltio.comSumit.Singh@COMPANY.comIt is no support, we can use that contact on technical issues level (API implementation etc) Reltio UI with data accesuse request manager: https://requestmanager.COMPANY.com/Reltio Commercial MDM - GBLUSReltio Customer MDM - GBLPing FederateDL-CIT-PXEDOperations@COMPANY.comPing Federate/OAuth2 supportMAPP NavigatorGBL-FBO-MAPP NAVIGATOR HYPERCAREDL-BTAMS-MAPP-Navigator@COMPANY.com (rarely respond)MAPP Nav issuesHarmony BitbucketGBL-CBT-GBI HARMONY SERVICESDL-GBI-Harmony-Support@COMPANY.comConfluence page:ATP Harmony Service SDConfluence, JiraGBL-DA-DEVSECOPS TOOLS SUPPORTDL-SESRM-ATLASSIAN-SUPPORT <DL-SESRM-ATLASSIAN-SUPPORT@COMPANY.com>ArtifactoryGBL-SESRM-ARTIFACTORY SUPPORTDL-SESRM-ARTIFACTORY-SUPPORT@COMPANY.comMule integration team supportDL-AIS Mule Integration Support DL-AIS-Mule-Integration-Support@COMPANY.comUsed to integrate with mule proxy VOD DCRLaurie.Koudstaal@COMPANY.comPOC if Veeva did not send an input file for the VOD DCR process for 24 hoursExample: there is a description how to request with https://digitalondemand.COMPANY.com/for a ticket assigned to one of groups above. Snowflake new schema/group/role creation"
},
{
"title": "Global Clients",
"pageID": "310963401",
"pageLink": "/display/GMDM/Global+Clients",
"content": "ClientContactCICRProbably AmishADTSDL-BTAMS-ENGAGE-PLUS@COMPANY.comEASIENGAGEESAMPLESSomya.Jain@COMPANY.com;Vijay.Bablani@COMPANY.com;Lori.Reynolds@COMPANY.comGANTGangadhar.Nadpolla@COMPANY.comGRACECory.Arthus@COMPANY.comGRVvikas.verma@COMPANY.com; Luther Chris <chris.luther@COMPANY.com>; Matej.Dolanc@COMPANY.comJOShweta.Kulkarni@COMPANY.comMAPDL-BT-Production-Engineering@COMPANY.com; Matej.Dolanc@COMPANY.comMAPPDL-BTAMS-MAPP-Navigator@COMPANY.com; Rajesh.K.Chengalpathy@COMPANY.comMEDICDL-F&BO-MEDIC@COMPANY.comMULEDL-AIS-Mule-Integration-Support@COMPANY.comAmish.Adhvaryu@COMPANY.comODSDL-GBI-PFORCERX_ODS_Support@COMPANY.comONEMEDMarsha.Wirtel@COMPANY.com;AnveshVedula.Chalapati@COMPANY.comPFORCEOLChristopher.Fani@COMPANY.comVEEVA_FIELDPFORCERXNagaJayakiran.Nagumothu@COMPANY.com;dl-pforcerx-support@COMPANY.comPTRSSagar.Bodala@COMPANY.com;bhushan.shanbhag@COMPANY.comJAPAN DWHDL-GDM-ServiceOps-Commercial_APAC@COMPANY.com DL-ATP-SERVICEOPS-JPN-DATALAKE@COMPANY.comCHINAChen, Yong <Yong.Chen@COMPANY.com>; QianRu.Zhou@COMPANY.comKOL_ONEVIEWDL-SFA-INF_Support_PforceOL@COMPANY.comSolanki,Hardik (US - Mumbai)<hsolanki@COMPANY.com>Yagnamurthy, Maanasa (US - Hyderabad) <myagnamurthy@COMPANY.com>NEXUS SriVeerendra.Chode@COMPANY.com;DL-Acc-GBICC-Team@COMPANY.comIMPROMPTUPRAWDOPODOBNIE AMISHCDWNarayanan, Abhilash <Abhilash.KadampanalNarayanan@COMPANY.com>Balan, Sakthi <Sakthi.Balan@COMPANY.com>Raman, Krishnan <Krishnan.Raman@COMPANY.com>ICUEBrahma, Bagmita <Bagmita.Brahma2@COMPANY.com>Solanki, Hardik <Hardik.Solanki@COMPANY.com>Tikyani, Devesh <Devesh.Tikyani@COMPANY.com>EVENTHUBSNOWFLAKEClientContactC360DL-C360_Support@COMPANY.comPT&EDL-PTE-Batch-Team@COMPANY.com>;  Drabold, Erich <Erich.Drabold@COMPANY.com>DQ_OPSmarkus.henriksson@COMPANY.com;dl-atp-dq-ops@COMPANY.comaccentureDL-Acc-GBICC-Team@COMPANY.comBig bossesPratap.Deshmukh@COMPANY.comMikhail.Komarov@COMPANY.comRafael.Aviles@COMPANY.com"
},
{
"title": "How to login to Service Manager",
"pageID": "218448126",
"pageLink": "/display/GMDM/How+to+login+to+Service+Manager",
"content": "How to add a user to Service Manager toolChoose link: https://smweb.COMPANY.com/SCAccountRequest.aspx#/searchFind yourselfClick "Next >>"Choose proper role: Service desk analyst and click „Needs training”When you have your training succeeded, there is a need to choose groups to which you want to be added :GBL-ADL-ATP GLOBAL MDM - HUB DEVOPSYou do it here:Please remember when you click “Add selected group to cart” there is a second approval step click: “SUBMIT”.When permissions will be granted you can explore Service Manager possibilities here: https://sma.COMPANY.com/sm/index.do"
},
{
"title": "How to Escalate btondemand Ticket Priority",
"pageID": "218448925",
"pageLink": "/display/GMDM/How+to+Escalate+btondemand+Ticket+Priority",
"content": "Below is a copy of: AWS Rapid Support → How to Escalate Ticket PriorityHow to Escalate Ticket PriorityTickets will be opened as low priority by default and response time will align to the restoration and resolution times listed in the SLA below. If your request priority needs to be change follow these instructions:Use the Chat function at BT On Demand (or call the Service Desk at 1-877-733-4357)Select Get SupportSelect "Click here to continue without selecting a ticket option."Select ChatProvide the existing ticket number you already openedAsk that ticket Priority be raised to Medium, High or Critical based on the issue and utilize one of the following key phrases to help set priority:Issue is Effecting Production ApplicationProduct Quality is being impactedBatch is unable to proceedLife safety or physical security is impactedDevelopment work stopped awaiting resolution"
},
{
"title": "How to get AWS Account ID",
"pageID": "218453784",
"pageLink": "/display/GMDM/How+to+get+AWS+Account+ID",
"content": "MDM Hub components are deployed in different AWS Accounts. In a ticket support process, you might be asked about the AWS Account ID of the host, load balancer, or other resources. You can get it quickly in at least two ways described below.Using AWS ConsoleIn AWS Console: http://awsprodv2.COMPANY.com/ (How to access AWS Console) you can find the Account ID in any resource's Amazon Resource Name (ARN).Using curlSSH to a host and run this curl command, same for all AWS accounts:[ec2-user@euw1z2pl116 ~]$ curl http://169.254.169.254/latest/dynamic/instance-identity/document{"accountId" : "432817204314","architecture" : "x86_64","availabilityZone" : "eu-west-1b","billingProducts" : null,"devpayProductCodes" : null,"marketplaceProductCodes" : null,"imageId" : "ami-05c4f918537788bab","instanceId" : "i-030e29a6e5aa27e38","instanceType" : "r5.2xlarge","kernelId" : null,"pendingTime" : "2021-12-21T06:07:12Z","privateIp" : "10.90.98.178","ramdiskId" : null,"region" : "eu-west-1","version" : "2017-09-30"}"
},
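The identity document returned by the curl call above is plain JSON, so the Account ID can be pulled out programmatically. A minimal Python sketch (the sample document below just reuses values from the example output above; it is not live metadata):

```python
import json

# Sample instance identity document, shaped like the curl output above
# (values copied from the example, not fetched from live metadata).
doc = '{"accountId": "432817204314", "region": "eu-west-1", "instanceId": "i-030e29a6e5aa27e38"}'

def account_id(identity_json: str) -> str:
    """Extract the AWS Account ID from an instance identity document."""
    return json.loads(identity_json)["accountId"]

print(account_id(doc))  # 432817204314
```

On a real EC2 host you would feed it the output of `curl http://169.254.169.254/latest/dynamic/instance-identity/document` as shown above.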
{
"title": "How to push Docker image to artifactory.COMPANY.com",
"pageID": "218458682",
"pageLink": "/display/GMDM/How+to+push+Docker+image+to+artifactory.COMPANY.com",
"content": "I am using the AKHQ image as an example.Login to artifactory.COMPANY.comLog in with COMPANY credentials: https://artifactory.COMPANY.com/artifactory/Generate Identity Token: https://artifactory.COMPANY.com/ui/admin/artifactory/user_profileUse COMPANY username and generated Identity Token in "docker login artifactory.COMPANY.com"marek@CF-19CHU8:~$ docker login artifactory.COMPANY.comAuthenticating with existing credentials...Login SucceededPull, tag, and pushmarek@CF-19CHU8:~$ docker pull tchiotludo/akhq:0.14.10.14.1: Pulling from tchiotludo/akhq...Digest: sha256:b7f21a6a60ed1e89e525f57d6f06f53bea6e15c087a64ae60197d9a220244e9cStatus: Downloaded newer image for tchiotludo/akhq:0.14.1docker.io/tchiotludo/akhq:0.14.1marek@CF-19CHU8:~$ docker tag tchiotludo/akhq:0.14.1 artifactory.COMPANY.com/mdmhub-docker-dev/tchiotludo/akhq:0.14.1marek@CF-19CHU8:~$ docker push artifactory.COMPANY.com/mdmhub-docker-dev/tchiotludo/akhq:0.14.1The push refers to repository [artifactory.COMPANY.com/mdmhub-docker-dev/tchiotludo/akhq]0.14.1: digest: sha256:b7f21a6a60ed1e89e525f57d6f06f53bea6e15c087a64ae60197d9a220244e9c size: 1577And that's all, you can now use this image from artifactory.COMPANY.com!"
},
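The tagging step above always follows the same naming scheme: registry host, then repository, then the original image name and tag. A small sketch of that convention (the helper function name is made up, not part of any tooling):

```python
REGISTRY = "artifactory.COMPANY.com"
REPOSITORY = "mdmhub-docker-dev"

def artifactory_tag(image: str) -> str:
    # Target tag for the "docker tag" / "docker push" sequence above:
    # <registry>/<repository>/<original image name>:<version>
    return f"{REGISTRY}/{REPOSITORY}/{image}"

print(artifactory_tag("tchiotludo/akhq:0.14.1"))
# artifactory.COMPANY.com/mdmhub-docker-dev/tchiotludo/akhq:0.14.1
```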
{
"title": "Emergency contact list",
"pageID": "218459579",
"pageLink": "/display/GMDM/Emergency+contact+list",
"content": "In case of emergency please inform the person from the list attached to each environment.EMEA:Varganin, A.J. <Andrew.J.Varganin@COMPANY.com>; Trivedi, Nishith <Nishith.Trivedi@COMPANY.com>; Austin, John <John.Austin@COMPANY.com>; Simon, Veronica <Veronica.Simon@COMPANY.com>; Adhvaryu, Amish <Amish.Adhvaryu@COMPANY.com>; Kothandaraman, Sathyanarayanan <Sathyanarayanan.Kothandaraman@COMPANY.com>; Dolanc, Matej <Matej.Dolanc@COMPANY.com>; Kunchithapatham, Bhavanya <Bhavanya.Kunchithapatham@COMPANY.com>; Bhowmick, Aditya <Aditya.Bhowmick@COMPANY.com>GBL:TO-DOGBL US:TO-DOEMEA:TO-DOAMER:TO-DO"
},
{
"title": "How to handle issues reported to DL",
"pageID": "294665000",
"pageLink": "/display/GMDM/How+to+handle+issues+reported+to+DL",
"content": "Create a ticket in JiraName: "DL: {{ email title }}"Epic: BAUFix Version(s): BAUUse below template:MDM Hub Issue Response Template.oftReplace all the red placeholders. Fill in the table where you can, based on original email.Respond to the email, requesting additional details if any of the table rows could not be filled in.Update the ticket:Copy/Paste the filled tableAdjust the priority based on the "Business impact details" row"
},
{
"title": "Sample estimation for jira tickets",
"pageID": "415215566",
"pageLink": "/display/GMDM/Sample+estimation+for+jira+tickets",
"content": "1https://jira.COMPANY.com/browse/MR-8591(Disable keycloak by default)https://jira.COMPANY.com/browse/MR-8544(Investigate server git hooks in BitBucket)https://jira.COMPANY.com/browse/MR-8508(Lack of changelog when build from master)https://jira.COMPANY.com/browse/MR-8506(pvc-autoresizer deployment on PRODs)https://jira.COMPANY.com/browse/MR-8502(Dashboards adjustments)2https://jira.COMPANY.com/browse/MR-8649 (Move kong-mdm-external-oauth-plugin to mdm-utils repo)https://jira.COMPANY.com/browse/MR-8585 (Alert about not ready ScaledObject)https://jira.COMPANY.com/browse/MR-8539 (Reduce number of stored Cadvisor metrics and labels)https://jira.COMPANY.com/browse/MR-8531 (Old monitoring host decomissioning)https://jira.COMPANY.com/browse/MR-8375 (Quality Gateway: deploy publisher changes to PRODs)https://jira.COMPANY.com/browse/MR-8359 (Write article to describe Airflow upgrade procedure)https://jira.COMPANY.com/browse/MR-8166 (Fluentd - improve deployment time and downtime)https://jira.COMPANY.com/browse/MR-8128 (Turn on compression in reconciliation service)3https://jira.COMPANY.com/browse/MR-8543 (POC: Create local git hook with secrets verification)https://jira.COMPANY.com/browse/MR-8503 (Replace hardcoded rate intervals)https://jira.COMPANY.com/browse/MR-8370 (Investigate and plan fix for different version of monitoring CRDs)https://jira.COMPANY.com/browse/MR-8245 (Fluentbit: deploy NPRODs)https://jira.COMPANY.com/browse/MR-7926 (Move jenkins agents containers definition to inbound-services repo)5https://jira.COMPANY.com/browse/MR-8334 (Implement integration with Grafana)https://jira.COMPANY.com/browse/MR-7720 (Logstash - configuration creation and deployment)https://jira.COMPANY.com/browse/MR-7417 (Grafana dashboards backup process)https://jira.COMPANY.com/browse/MR-7075 (POC: Store transaction logs for 6 months)8https://jira.COMPANY.com/browse/MR-8258 (Implement integration with Kibana)https://jira.COMPANY.com/browse/MR-6285 (Prepare Kafka upgrade 
plan to version 3.3.2)https://jira.COMPANY.com/browse/MR-5981 (Process analysis)https://jira.COMPANY.com/browse/MR-5694 (Implement Reltio mock)https://jira.COMPANY.com/browse/MR-5835 (Mongo backup process: implement backup process)"
},
{
"title": "FAQ - Frequently Asked Questions",
"pageID": "415217275",
"pageLink": "/display/GMDM/FAQ+-+Frequently+Asked+Questions",
"content": ""
},
{
"title": "API",
"pageID": "415217277",
"pageLink": "/display/GMDM/API",
"content": "Is there an MDM Hub API Documentation?Of course - it is available for each component:Manager/API Router: https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-gw-spec-emea-prod/swagger-ui/index.html?configUrl=/api-gw-spec-emea-prod/v3/api-docs/swagger-configBatch Service: https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-batch-spec-emea-prod/swagger-ui/index.html?configUrl=/api-batch-spec-emea-prod/v3/api-docs/swagger-configDCR Service: https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-dcr-spec-emea-prod/swagger-ui/index.htmlWhat is the difference between /api-emea-prod and /api-gw-emea-prod API endpoints?Both of these endpoints are leading to different API Components:/api-emea-prod is the API Router endpoint/api-gw-emea-prod is the Manager endpointBoth of these Components' APIs can be used in similar way. The main difference is:API Router allows routing DCR Requests to the DCR component: /api-emea-prod/dcr endpoint leads to the DCR Service API.API Router allows routing HCP/HCO Search requests to other Global MDM tenants, based on the search query filter's Country parameter.Example 1: We are trying to find HCPs named "John" in the US market. We can only use the EMEA HUB API:Sending an HTTP request:GET https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-gw-emea-prod/entities?filter=equals(type, 'configuration/entityTypes/HCP') and equals(attributes.Country, 'US') and equals(attributes.FirstName, 'John')returns nothing, because we are using the /api-gw-emea-prod/* endpoint - the Manager. 
It is connected directly to the EMEA PROD Reltio, which does not contain the US market.Sending an HTTP request:GET https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-emea-prod/entities?filter=equals(type, 'configuration/entityTypes/HCP') and equals(attributes.Country, 'US') and equals(attributes.FirstName, 'John')routes the search to the GBLUS PROD Reltio, and returns results from there.Example 2: We are trying to find HCPs named "John" in the US, GB, IE and AU markets. We can only use the EMEA HUB API:Sending an HTTP request:GET https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-gw-emea-prod/entities?filter=equals(type, 'configuration/entityTypes/HCP') and in(attributes.Country, 'US,GB,IE,AU') and equals(attributes.FirstName, 'John')searches for American, British, Irish or Australian HCPs in the EMEA PROD Reltio. Only Ireland is available in this tenant, so it returns results, but only limited to this marketSending an HTTP request:GET https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-emea-prod/entities?filter=equals(type, 'configuration/entityTypes/HCP') and in(attributes.Country, 'US,GB,IE,AU') and equals(attributes.FirstName, 'John')splits the search into three separate searches:- search for American HCPs in the GBLUS PROD Reltio- search for British or Irish HCPs in the EMEA PROD Reltio- search for Australian HCPs in the APAC PROD Reltioand returns aggregated results.What is the difference between /api-emea-prod and /ext-api-emea-prod API endpoints?These endpoints use different Authentication methods:when using /api-emea-prod you are using an API Key authentication. Your requests must contain the apikey header with the secret that you received from the Hub Support Team.when using /ext-api-emea-prod you are using an OAuth2 authentication. 
You must fetch your token from the COMPANY PingFederate and send it in your request's Authorization: Bearer header.It is recommended that all the API Users use OAuth2 and /ext-api-emea-prod endpoint, leaving Key Auth for support and debugging purposes.When should I use a GET Entity operation, when should I use a SEARCH Entity operation?There are two main ways of fetching an HCP/HCO JSON using HUB API:GET Entity:Sending GET /entities/{Reltio ID}It is the simplest and cheapest operation. Use it when you know the exact Reltio ID of the entity you want to find.SEARCH Entity:Sending GET /entities?filter=equals()...It allows finding one or more profiles by their attributes' values. Use it when you do not know the exact Reltio ID or do not know how many results you expect.Read more about Search filters here: https://docs.reltio.com/en/explore/get-going-with-apis-and-rocs-utilities/reltio-rest-apis/model-apis/entities-api/get-entity/filtering-entitiesThe two requests below correspond to each other:GET https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-emea-prod/entities/0TWPf9dGET https://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-emea-prod/entities?filter=equals(uri, 'entities/0TWPf9d')Although both are quick, Hub recommends only using the first one to find an entity by URI:GET Entity gets passed to Reltio as-is and results are returned straight awaySEARCH Entity gets analyzed on the Hub side first. 
If the search filter does not specify a country (a required parameter!), a full list of allowed countries is fetched from the API User's configuration and, as a result, the request may end up being sent to every single Reltio tenant.What is the difference between POST and PATCH /hcp, /hco, /entities operations?The key difference is:If we POST a record (crosswalk + attributes) to Hub, it is created in Reltio straight away:if the crosswalk already existed in Reltio, it gets overwrittenif the record already existed in Reltio, the attributes get completely overwritten:attribute values that did not exist in Reltio before, now are addedattributes that had different values in Reltio before, now are updatedattribute values that were present in Reltio before, but did not exist in the POSTed record, now are removedIf we PATCH a record (crosswalk + attributes) to Hub:we check whether this crosswalk already exists in Reltio. If it does not, we return an HTTP Bad Request error response.If the record already existed in Reltio, only the PATCHed subset of attributes is updated:attribute values that did not exist in Reltio before, now are addedattributes that had different values in Reltio before, now are updatedattribute values that were present in Reltio before, but did not exist in the PATCHed record, are left untouchedPOST should be used if we are sending the full JSON - crosswalk + all attributes.PATCH should be used if we are only sending incremental changes to a pre-existing profile."
},
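The POST vs PATCH difference described above can be illustrated on a flat attribute dict. This is a simplified sketch of the semantics only, not the Hub implementation (which operates on full Reltio attribute arrays and crosswalks):

```python
def apply_post(existing: dict, posted: dict) -> dict:
    # POST: attributes are replaced wholesale - any attribute absent
    # from the posted record is removed from the profile.
    return dict(posted)

def apply_patch(existing: dict, patched: dict) -> dict:
    # PATCH: only the submitted subset is updated; attributes
    # not present in the patch are left untouched.
    merged = dict(existing)
    merged.update(patched)
    return merged

existing = {"FirstName": "John", "LastName": "Doe", "Phone": "123"}
change = {"Phone": "456"}
print(apply_post(existing, change))   # {'Phone': '456'} - names removed
print(apply_patch(existing, change))  # {'FirstName': 'John', 'LastName': 'Doe', 'Phone': '456'}
```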
{
"title": "Merging Into Existing Entities",
"pageID": "462075948",
"pageLink": "/display/GMDM/Merging+Into+Existing+Entities",
"content": "Can I post a profile and merge it to one already existing in MDM?Yes, there are 3 ways you can do that:Merge-On-The-FlyContributor MergeManual MergeMerge-On-The-Fly - DetailsMerge-on-the-fly is a Reltio mechanism using matchGroups configuration. MatchGroups contain lists of requirements that two entities must pass in order to be merged. There are two types of matchGroups: "suspect" and "automatic". Suspects merely display as potential matches in Reltio UI, but Automatic groups trigger automatic merges of the objects.Example of an HCP automatic matchGroup from Reltio's configuration (EMEA PROD):\n {\n "uri": "configuration/entityTypes/HCP/matchGroups/ExctONEKEYID",\n "label": "(iii) Auto Rule - Exact Source Unique Identifier(ReferBack ID)",\n "type": "automatic",\n "useOvOnly": "true",\n "rule": {\n "and": {\n "exact": [\n "configuration/entityTypes/HCP/attributes/Identifiers/attributes/ID",\n "configuration/entityTypes/HCP/attributes/Country"\n ],\n "in": [\n {\n "values": [\n "OneKey ID"\n ],\n "uri": "configuration/entityTypes/HCP/attributes/Identifiers/attributes/Type"\n },\n {\n "values": [\n "ONEKEY"\n ],\n "uri": "configuration/entityTypes/HCP/attributes/OriginalSourceName"\n },\n {\n "values": [\n "Yes"\n ],\n "uri": "configuration/entityTypes/HCP/attributes/Identifiers/attributes/Trust"\n }\n ]\n }\n },\n "scoreStandalone": 100,\n "scoreIncremental": 0\n \nAbove example merges two entities having same Country attribute and same Identifier of type "OneKey ID". Identifier must have the Trusted flag and the OriginalSourceName must be "ONEKEY".When posting a record to MDM, matchGroups are evaluated. 
If an automatic matchGroup is matched, Reltio will perform a Merge-On-The-Fly, adding the posted crosswalk to an existing profile.Contributor Merge - DetailsWhen posting an object to Reltio, we can use its Crosswalk contributorProvider/dataProvider mechanism to bind posted crosswalk to an existing one.If we know that a crosswalk exists in MDM, we can add it to the crosswalks array with contributorProvider=true and dataProvider=false flags. Crosswalk marked like that serves as an indicator of an object to bind to.The other crosswalk must have the flags set the other way around: contributorProvider=false and dataProvider=true. This is the crosswalk that will de facto provide the attributes and be considered for the Hub's ingestion rules.Example - we are sending data with an MAPP crosswalk and binding that crosswalk to the existing ONEKEY crosswalk:\n{\n "hcp": {\n "type": "configuration/entityTypes/HCP",\n "attributes": {\n "FirstName": [\n {\n "value": "John"\n }\n ],\n "LastName": [\n {\n "value": "Doe"\n }\n ],\n "Country": [\n {\n "value": "ES"\n }\n ]\n },\n "crosswalks": [\n {\n "type": "configuration/sources/MAPP",\n "value": "B53DFCEA-8231-E444-24F8-7E72C62C0147",\n "contributorProvider": false,\n "dataProvider": true\n },\n {\n "type": "configuration/sources/ONEKEY",\n "value": "WESR04566503",\n "contributorProvider": true,\n "dataProvider": false\n }\n ]\n }\n}\nEvery MDM record also has a crosswalk of type "Reltio" and value equal to Reltio ID. 
We can use that to bind our record to the entity:\n{\n "hcp": {\n "type": "configuration/entityTypes/HCP",\n "attributes": {\n "FirstName": [\n {\n "value": "John"\n }\n ],\n "LastName": [\n {\n "value": "Doe"\n }\n ],\n "Country": [\n {\n "value": "ES"\n }\n ]\n },\n "crosswalks": [\n {\n "type": "configuration/sources/MAPP",\n "value": "B53DFCEA-8231-E444-24F8-7E72C62C0147",\n "contributorProvider": false,\n "dataProvider": true\n },\n {\n "type": "configuration/sources/Reltio",\n "value": "00TnuTu",\n "contributorProvider": true,\n "dataProvider": false\n }\n ]\n }\n}\nThis approach has a downside: crosswalks are bound, so they cannot be unmerged later on.Manual Merge - DetailsLast approach is simply creating a record in Reltio and straight away merging it with another.Let's use the previous example. First, we are simply posting the MAPP data:\n{\n "hcp": {\n "type": "configuration/entityTypes/HCP",\n "attributes": {\n "FirstName": [\n {\n "value": "John"\n }\n ],\n "LastName": [\n {\n "value": "Doe"\n }\n ],\n "Country": [\n {\n "value": "ES"\n }\n ]\n },\n "crosswalks": [\n {\n "type": "configuration/sources/MAPP",\n "value": "B53DFCEA-8231-E444-24F8-7E72C62C0147"\n }\n ]\n }\n}\nResponse:\n{\n "uri": "entities/0zu5sHM",\n "status": "created",\n "errorCode": null,\n "errorMessage": null,\n "COMPANYGlobalCustomerID": "04-131155084",\n "crosswalk": {\n "type": "configuration/sources/MAPP",\n "value": "B53DFCEA-8231-E444-24F8-7E72C62C0147",\n "updateDate": 1728043082037,\n "deleteDate": ""\n }\n}\nWe can now use the URI from response to merge the new record into existing one:\nPOST /entities/0zu5sHM/_merge?uri=00TnuTu\n"
},
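The contributor-merge payloads above always pair one data-providing crosswalk with one contributor crosswalk, so they can be generated from a small helper. The function below is hypothetical; only the payload shape is taken from the examples above:

```python
import json

def contributor_merge_payload(attributes: dict, data_crosswalk: dict,
                              target_crosswalk: dict) -> dict:
    """Bind a new data-providing crosswalk to an existing (contributor) one."""
    return {
        "hcp": {
            "type": "configuration/entityTypes/HCP",
            "attributes": attributes,
            "crosswalks": [
                # The crosswalk that provides the attributes:
                {**data_crosswalk, "contributorProvider": False, "dataProvider": True},
                # The pre-existing crosswalk to bind to:
                {**target_crosswalk, "contributorProvider": True, "dataProvider": False},
            ],
        }
    }

payload = contributor_merge_payload(
    {"FirstName": [{"value": "John"}], "LastName": [{"value": "Doe"}], "Country": [{"value": "ES"}]},
    {"type": "configuration/sources/MAPP", "value": "B53DFCEA-8231-E444-24F8-7E72C62C0147"},
    {"type": "configuration/sources/Reltio", "value": "00TnuTu"},
)
print(json.dumps(payload, indent=2))
```

Remember the downside noted above: crosswalks bound this way cannot be unmerged later on.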
{
"title": "Quality rules",
"pageID": "164470090",
"pageLink": "/display/GMDM/Quality+rules",
"content": "Quality engine is responsible for preprocessing Entity when a specific precondition is met. This engine is started in the following cases:Rest operation (POST/PATCH) on /hco endpoint on MDM ManagerRest operation (POST/PATCH) on /hcp endpoint on MDM ManagerWhen a validationOn parameter is set to true the first step in HCP/HCO request processing is quality engine validation. MDM Manager Configuration should contain the following quality rules:hcpQualityRulesConfigshcoQualityRulesConfigshcpAffiliatedHCOsQualityRulesConfigsThese properties are able to accept a list of yaml files. Each file has to be added in environment repository in /config_files/<env_name>/mdm_mananger/config/.*quality-rules.yaml. Then each of these files has to be added to these variables in inventory /<env_name>/group_vars/gw-services/mdm_manager.yml. For HCP request processing, files are loaded in the following order:hcpQualityRulesConfigshcpAffiliatedHCOsQualityRulesConfigsFor HCO request processing, files are loaded only from the following configuration:hcoQualityRulesConfigsIt is a good practice to divide files in a common logic and a specific logic for countries. For example, HCP Quality Rules file names should have the following structure:hcp/hcp/affiliatedhco | common/country-* | quality-rules.yamlhcp-common-quality-rules.yamlhcp-country-china-quality-rules.yamlQuality rules yaml file is a set of rules, which will be applied on Entity. Each rule should have the following yaml structure: preconditionsmatch the condition is met when the attribute matches the pattern or string value provided in values' list. e.g. source the condition is met when the crosswalk type ends with the values provided in the list. e.g. default (Empty)/Default value for precondition is "True" value. The preconditions section in yaml file is not required.checkmandatory this type of check evaluates if the attribute is mandatory. When the check is correctly evaluated, then the action will be performed. 
e.g.mandatoryGroup this check will pass when all attributes provided in the list will not be empty. e.g. mandatoryArray this check will pass when the array provided in the list will contain at least minimum number of values. e.g. actionWhen the precondition and check are properly evaluated then a specific action can be invoked on entity attributes.clean this action replaces attribute values which match the specific pattern with the value from replacement parameter. e.g. reject this action rejects the entity when the precondition is met. e.g. remove- based on the madatoryGroup attributes list, this action removes these attributes from entity. e.g. set this action sets the value provided in parameter on the specific attribute. e.g. modify this action sets the value on the specific attribute based on attributes in entity. To reference entity's attributes, use curly braces {}. This rule adds country prefix for each element in specialties array. e.g. chineseNamesToEnglish this action translates the attribute from source (Chinese) to target attribute (English). e.g. addressDigest this action counts MD5 based on Address attributes and creates Crosswalk for MD5 digest. e.g. autofillSourceName - this action adds SourceName if it not exists to given attributeaction: type: autofillSourceName attribute: AddressesThe logic of the quality engine rule check is as follows:The precondition is checked (if precondition section is not defined, then the default value is True)Then the check is evaluated on specified Entity (if check section is not defined, then by default the action will be executed without check evaluating)If the check will return attributes to process, then the action is executed.Quality rules DOC: "
},
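The precondition → check → action flow described above can be sketched as follows. This is an illustration on a flat dict entity with callables standing in for the yaml sections; the real engine evaluates yaml rule files against full Reltio JSON:

```python
def evaluate_rule(entity: dict, rule: dict) -> dict:
    """Apply one quality rule: the action runs only if both the
    precondition and the check pass (both default to True)."""
    precondition = rule.get("precondition", lambda e: True)
    check = rule.get("check", lambda e: True)
    if precondition(entity) and check(entity):
        return rule["action"](entity)
    return entity

# Example: a "clean"-style action that trims whitespace from FirstName,
# but only for Chinese records that actually carry a FirstName.
rule = {
    "precondition": lambda e: e.get("Country") == "CN",
    "check": lambda e: "FirstName" in e,
    "action": lambda e: {**e, "FirstName": e["FirstName"].strip()},
}
print(evaluate_rule({"Country": "CN", "FirstName": " Li "}, rule))
# {'Country': 'CN', 'FirstName': 'Li'}
```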
{
"title": "Relation replacer",
"pageID": "164470095",
"pageLink": "/display/GMDM/Relation+replacer",
"content": "After getRelation operation is invoked, "Relation Replacer" feature can be activated on returned relation entity object. When entity is merged, Reltio sometimes does not replace objectUri id with new updated value. This process will detect such situation and replace objectUri with correct URI from crosswalk. Relation replacer process operates under the following conditions:Relation replacer will check EndObject and StartObject sections.When objectUri is different from each entity id from crosswalks section, then objectURI is replaced with entity id from crosswalks.When crosswalks contain multiple entries in list and there is a situation that crosswalks list contains different entity uri, relation replacer process ends with the following warning: "Object has more than one possible uri to replace" it is not possible to decide which entity should be pointed as StartObject or EndObject after merge."
},
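The replacement logic above boils down to a few lines. This is a sketch under assumed field names (an objectURI on the Start/EndObject section and a uri per crosswalk entry), not the actual Hub code:

```python
def replace_object_uri(obj: dict) -> dict:
    """Relation-replacer sketch: fix a stale objectURI after a merge,
    using the entity uri carried by the crosswalks."""
    uris = {cw["uri"] for cw in obj.get("crosswalks", []) if "uri" in cw}
    if len(uris) > 1:
        # Cannot decide which entity the relation should point at.
        print("Object has more than one possible uri to replace")
        return obj
    if uris and obj.get("objectURI") not in uris:
        return {**obj, "objectURI": uris.pop()}
    return obj

stale = {"objectURI": "entities/0OldUri", "crosswalks": [{"uri": "entities/0NewUri"}]}
print(replace_object_uri(stale)["objectURI"])  # entities/0NewUri
```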
{
"title": "SMTP server",
"pageID": "387170360",
"pageLink": "/display/GMDM/SMTP+server",
"content": "Access to SMTP server is granted for each region separately:AMERDestination Host: amersmtp.COMPANY.comDestination SMTP Port: 25Authentication: NONEEMEADestination Host: emeasmtp.COMPANY.comDestination SMTP Port: 25Authentication: NONEAPACDestination Host: apacsmtp.COMPANY.comDestination SMTP Port: 25Authentication: NONETo request access to SMTP server there is need to fill in the SMTP relay registration form through http://ecmi.COMPANY.com portal."
},
{
"title": "Airflow",
"pageID": "218432163",
"pageLink": "/display/GMDM/Airflow",
"content": ""
},
{
"title": "Overview",
"pageID": "218432165",
"pageLink": "/display/GMDM/Overview",
"content": "ConfigurationAirflow is deployed on kubernetes cluster using official airflow helm chart:Github repositoryDocumentationAirflow DockerfileMain airflow chart adjustments(creting pvc's, k8s jobs, etc.) are located in components repository.Environment's specific configuration is located in cluster configuration repository.DeploymentLocal deploymentAirflow can be easily deployed on local kubernetes cluster for testing purposes. All you have to do is:If deployment is performed on windows machine please make sure that install.sh, encrypt.sh, decrypt.sh and .config files have unix line endings. Otherwise it will cause deployment errors.Edit .config file to enable airflow deployment(and any other component you want. To enable component it needs to have assigned value greater than 0\nenable_airflow=1\nRun ./install.sh file located in main helm directory\n./install.sh\nEnvironment deploymentEnvironment deployment should be performed with great care.If deployment is performed on windows machine please make sure that install.sh, encrypt.sh, decrypt.sh and .config files have unix line endings. 
Otherwise it will cause deployment errors.Environment deployemnt can be performed after connecting local machine to remote kubernetes cluster.Prepare airflow configuration in cluster env repository.Adjust .config file to update airflow(and any other service you want)\nenable_airflow=1\nRun ./install.sh script to update kuberntes clusterCheck if all airflow pods are working correctlyHelm chart configurationYou can find described available configuration in values.yaml file in airflow github repository.Helm chart adjustmentsAdditionally to base airflow kubernetes resources there are created:Kubernetes job used to create additional usersPersistent volume claim for airflow dags data(for each prod/nonprod tenant)Secrets from .Values.secretsWebserver ingressDefinitions: helm templatesDags deploymentDags are deployed using ansible playbook: install_mdmgw_airflow_services_k8s.ymlPlaybook uses kubectl command to work with airflow pods.You can run this playbook locally:To modify lists of dags that should be deployed during playbook run you have to adjust airflow_components list:e.g.\nairflow_components:\n - lookup_values_export_to_s3\nRun playbook(adjust environment)e.g.\nansible-playbook install_mdmgw_airflow_services.yml -i inventory/emea_dev/inventory\nOr with jenkins job:https://jenkins-gbicomcloud.COMPANY.com/job/MDM_Airflow_Deploy_jobs/"
},
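The deployment steps above enable a component when its enable_<name> flag in the .config file has a value greater than 0. A minimal sketch of that convention (a hypothetical parser, not the actual install.sh logic):

```python
def parse_enabled_components(config_text):
    """Return component names whose enable_<name> flag is greater than 0."""
    enabled = []
    for line in config_text.splitlines():
        line = line.strip()
        # Skip blank lines, comments, and anything that is not key=value.
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        if key.startswith("enable_"):
            try:
                if int(value) > 0:
                    enabled.append(key[len("enable_"):])
            except ValueError:
                pass  # non-numeric values are ignored
    return enabled

config = """
enable_airflow=1
enable_kafka=0
# comment
enable_mongo=2
"""
print(parse_enabled_components(config))  # ['airflow', 'mongo']
```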
{
"title": "Airflow DAGs",
"pageID": "164470169",
"pageLink": "/display/GMDM/Airflow+DAGs",
"content": ""
},
{
"title": "●●●●●●●●●●●●●●● [https://confluence.COMPANY.com/plugins/servlet/pii4conf/pii?id=1589274]",
"pageID": "310943460",
"pageLink": "/pages/viewpage.action?pageId=310943460",
"content": "DescriptionDag used to prepare data from FLEX(US) tenant to be lodaed into  GBLUS tenant.S3 kafka connector on FLEX enironment uploads files everyday to s3 bucket as multiple small files. This dag takes those multiple files and concatenate them into one. ETL team downloads this concatenated file from s3 bucket and upload it into GBLUS tenant via batch service.Examplehttps://airflow-amer-prod-gbl-mdm-hub.COMPANY.com/tree?dag_id=concat_s3_files_gblus_prod"
},
{
"title": "active_hcp_ids_report",
"pageID": "310939877",
"pageLink": "/display/GMDM/active_hcp_ids_report",
"content": "DescriptionGenerates report of active hcp's from defined countries.Examplehttps://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/tree?dag_id=active_hcp_ids_report_emea_prodStepsCreate mongo collection from query on entity_history collectionExport collection to excel formatExport report to s3 directory"
},
{
"title": "China reports",
"pageID": "310939879",
"pageLink": "/display/GMDM/China+reports",
"content": "DescriptionSet of dags that produces china reports on gbl environment that are later sent via email:Single reports are generated by executing the defined queries on mongo, then extracts are published on s3. Then main dags download exports from s3 and send an email with all reports.Main dag example:Report generating dag example:Dags listDags executed every day:china_generate_reports_gbl_prod - main dag that triggers the restchina_affiliation_status_report_gbl_prodchina_dcr_statistics_report_gbl_prodchina_hcp_by_source_report_gbl_prodchina_import_and_gen_dcr_statistics_report_gbl_prodchina_import_and_gen_merge_report_gbl_prodchina_merge_report_gbl_prodDags executed weekly:china_monthly_generate_reports_gbl_prod - main dag that triggers the rest china_monthly_hcp_by_channel_report_gbl_prodchina_monthly_hcp_by_city_type_report_gbl_prodchina_monthly_hcp_by_department_report_gbl_prodchina_monthly_hcp_by_gender_report_gbl_prodchina_monthly_hcp_by_hospital_class_report_gbl_prodchina_monthly_hcp_by_province_report_gbl_prodchina_monthly_hcp_by_source_report_gbl_prodchina_monthly_hcp_by_SubTypeCode_report_gbl_prodchina_total_entities_report_gbl_prod"
},
{
"title": "clear_batch_service_cache",
"pageID": "333156979",
"pageLink": "/display/GMDM/clear_batch_service_cache",
"content": "DescriptionThis dag is used to clear batch-service cache(mongo batchEntityProcessStatus collection). It deletes all records specified in csv file for specified batchName.To clear cache batch-service batchController/{batch_name}/_clearCache endpoint is used.Dag used by mdmhub hub-ui.Input parameters:batchNamefileName\n{\n "fileName": "inputFile.csv",\n "batchName": "testBatchTAGS"\n}\nMain stepsDownload input file from s3 directorySplits the file so that is has maximum of $partSize recordsExecutes request to batch-service batchController/{batch_name}/_clearCacheMove input file to s3 archive directoryDeletes temporary workspace from pvcprint report with information how many records have been deleted \n{'removedRecords': 1}\n\nExamplehttps://airflow-amer-nprod-gbl-mdm-hub.COMPANY.com/graph?dag_id=clear_batch_service_cache_amer_dev&root="
},
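The splitting and endpoint steps in the record above can be sketched as below; the base URL and $partSize value are assumptions, while the batchController/{batch_name}/_clearCache path comes from the record:

```python
def split_into_parts(records, part_size):
    """Split the record list into chunks of at most part_size entries."""
    return [records[i:i + part_size] for i in range(0, len(records), part_size)]

def clear_cache_url(base_url, batch_name):
    """URL of the batch-service endpoint that clears cached records."""
    return f"{base_url}/batchController/{batch_name}/_clearCache"

# Hypothetical input: ten record ids, part size of 4.
records = [f"id-{n}" for n in range(10)]
parts = split_into_parts(records, 4)
url = clear_cache_url("https://batch-service.local", "testBatchTAGS")
print(len(parts), url)
```

Each part would then be posted to the endpoint in turn, keeping individual requests small.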
{
"title": "distribute_nucleus_extract",
"pageID": "310939886",
"pageLink": "/display/GMDM/distribute_nucleus_extract",
"content": "DEPRECATEDDescriptionDistributes extracts that are sent by nucleus to s3 directory between multiple directories for the respective countries that are later used by inc_batch_* dagsInput and output directories are configured in dags configuration file:Dag:https://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=distribute_nucleus_extract_gbl_prod&root="
},
{
"title": "export_merges_from_reltio_to_s3",
"pageID": "310939888",
"pageLink": "/display/GMDM/export_merges_from_reltio_to_s3",
"content": "DescriptionDag used to schedule Reltio merges export, adjust file format and then uload file to s3 snowflake directory.Steps:Clearing workspace after previous runCalculating time range for incremental loads. For full exports(eg. export_merges_from_reltio_to_s3_full_emea_prod) this step sets start and end date as None. This way full extract is produced. For incremental loads start and end dates are calculated using last_days_count variableScheduling reltio exportWaiting for reltio export file(s3 sensor).Postprocessing fileUpload file to snowflake directoryExamplehttps://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=export_merges_from_reltio_to_s3_full_emea_prod"
},
{
"title": "get_rx_audit_files",
"pageID": "310943418",
"pageLink": "/display/GMDM/get_rx_audit_files",
"content": "DescriptionDownload rx_audit files from:SFTP server(external)s3 directory(internal - constant)Files are the uploaded to defined s3 directory that is later used by inc_batch_rx_audit dag.Examplehttps://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/tree?dag_id=inc_batch_rx_audit_gbl_prodUseful linksRX_AUDIT"
},
{
"title": "historical_inactive",
"pageID": "310943421",
"pageLink": "/display/GMDM/historical_inactive",
"content": "DescriptionDag used to implement history inactive processSteps:Download csv file with crosswalks of entities to recreateRecreate entities and upload to s3 directory as json fileTrigger snowflake stored procedureExamplehttps://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=historical_inactive_emea_prodReferenceSnowflake: History Inactive"
},
{
"title": "hldcr_reconciliation",
"pageID": "310943423",
"pageLink": "/display/GMDM/hldcr_reconciliation",
"content": "DescriptionHL DCR flow occasionally blocked some VRs' statuses from being sent to PforceRx in an outbound file, because Hub has not received an event from Reltio, informing about Change Request resolution. The exact event expected is CHANGE_REQUEST_CHANGED.To prevent the above, HLDCR Reconciliation process runs regularly, doing the following steps:Query MongoDB store (Collection DCRRequests) for VRs in CREATED status. Export result as list.For each VR from the list, generate a CHANGE_REQUEST_CHANGED event and post it to Kafka.Further processing is as usual - DCR Service enriches the event with current changeRequest state. If the changeRequest has been resolved, it updates the status in MongoDB.Examplehttps://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/tree?dag_id=hldcr_reconciliation_gbl_prod"
},
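The reconciliation loop in the record above (query CREATED VRs, emit one CHANGE_REQUEST_CHANGED event each) can be sketched roughly as follows; the field names are assumptions, and the Mongo query and Kafka producer are stubbed with plain Python callables:

```python
def reconcile_hldcr(find_created_vrs, post_event):
    """For each VR stuck in CREATED status, emit a CHANGE_REQUEST_CHANGED event.

    find_created_vrs: callable returning VR dicts (stand-in for a query on
    the DCRRequests collection); post_event: stand-in for a Kafka producer.
    """
    emitted = 0
    for vr in find_created_vrs():
        post_event({
            "type": "CHANGE_REQUEST_CHANGED",  # the event Reltio failed to send
            "changeRequestUri": vr["changeRequestUri"],
        })
        emitted += 1
    return emitted

sent = []
vrs = [{"changeRequestUri": "changeRequests/1", "status": "CREATED"},
       {"changeRequestUri": "changeRequests/2", "status": "CREATED"}]
count = reconcile_hldcr(lambda: vrs, sent.append)
print(count, sent[0]["type"])
```

Downstream, the DCR Service treats these synthetic events exactly like the ones Reltio should have produced.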
{
"title": "HUB Reconciliation process",
"pageID": "164470182",
"pageLink": "/display/GMDM/HUB+Reconciliation+process",
"content": "The reconciliation process was created to synchronize Reltio with HUB. Because Reltio sometimes does not generate events, and therefore these events are not consumed by HUB from the SQS queue and the HUB platform is out of sync with Reltio data. External Clients dose not receive the required changes, which cause that multiple systems are not consistent. To solve this problem this process was designed. The fully automated reconciliation process generates these missing events. Then these events are sent to the inbound Kafka topic, HUB platform process these events, updates mongo collection and route the events to external Clients topics.AirflowThe following diagram presents the reconciliation process steps:This directed acyclic diagram presents the steps that are taken to compare Reltio and HUB and produce the missing events. This diagram is divided into the following sections:Initialization and Reltio Data preparation - in this section the process invokes the Reltio export, and upload full export to mongo.clean_dirs_before_init, init_dirs, timestamp these 3 tasks are responsible for the directory structure preparation required in the further steps and timestamp capture required for the reconciliation process. Reltio and HUB data changes in time and the export is made at a specific point in time. We need to ensure that during comparison only entities that were changed before Reltio Export are compared. This requirement guarantee that only correct events are generated and consistent data is compared. entities_export the task invokes the Reltio Export API and triggers the export job in Reltio sensor_s3_reltio_file this task is an S3 bucket sensor. Because the Reltio export job is an asynchronous task running in the background, the file sensor checks the S3 location hub_reconciliation/<ENV>/RELTIO/inbound/ and waits for export. When the success criteria are met, the process exits with success. 
The timeout for this job is set to 24 hours, the poke interval is set to 10 minutes. download_reltio_s3_file, unzip_reltio_export, mongo_import_json_array, generate_mongo_indexes these 4 tasks are invoked after successful export generation. Zip is downloaded and extracted to the JSON file, then this file is uploaded to mongo collection. The generate_mongo_indexes task is responsible for generating mongo indexes in the newly uploaded collection. The indexes are created to optimize performance. archive_flex_s3_file_name After successful mongo import Reltio export is archived for future reference. HUB validation - Reltio ↔ HUB comparison - the main comparison and events generation logic is invoked in this SUB DAG. The details are described in the section below. Events generation  - after data comparison, generated events are sent to selected Kafka topic.Then standard events processing begins. The details are described in HUB documentation.Please check the following documents to find more details: Entity change events processing (Reltio)Event filtering and routing rulesProcessing events on client sideHUB validation - Reltio ↔ HUB comparisonThis directed acyclic diagram (SUB DAG) presents the steps that are taken to compare HUB and Reltio data in both directions. Because Reltio data is already uploaded and HUB (“entityHistory”) collection is always available we can immediately start the comparison process. mongo_find_reltio_hub_differnces - this process compares Reltio data to HUB data.  Mongo aggregation pipeline matches the entities from Reltio export to HUB profiles located in mongo collection by entity URI (ID). All Reltio profiles that are not presented in Reltio export data are marked as missing. All attributes in Reltio are compared to HUB profile attributes - based on this when the difference is found, it means that the profile is out of sync and new even should be generated. 
Based on these changes the HCP_CHANGED or HCO_CHANGED events are generated.When the profile is missing the HCP_CREATED or HCO_CREATED events are generated. mongo_find_hub_reltio_differnces - this process compares HUB entities to Reltio data. The process is designed to find only missing entities in Reltio, based on these changes the HCP_REMOVED or HCO_REMOVED events are generatedMongo aggregation pipeline matches the entities from HUB mongo collection to Reltio profiles by entity URI (ID). All HUB profiles that are not presented in Reltio export data are marked as missing for future reference. mongo_generate_hub_events_differences - this task is related to the automated reconciliation process. The full process is described in this paragraph.Configuration and schedulingThe process can be started in Airflow on demand. The configuration for this process is stored in the MDM Environment configuration repository. The following section is responsible for the HUB Reconciliation process activation on the selected environment:\nactive_dags:\n gbl_dev:\n - hub_reconciliation.py\nThe file is available in "inventory/scheduler/group_vars/all/all.yml"To activate the Reconciliation process on the new environment the new environment should be added to "active_dags" map.Then the "ansible-playbook install_airflow_dags.yml" needs to be invoked. After this new process is ready for use in Airflow. Reconciliation process To synchronize Reltio with HUB and therefore synchronize profiles in Reltio with external Clients the fully automated process is started after full HUB<->Reltio comparison. this is the "mongo_generate_hub_events_differences" task. The automated reconciliation process generates events. 
Then these events are sent to the inbound Kafka topic, HUB platform process these events, updates mongo collection and route the events to flex topic.The following diagram presents the reconciliation steps:Automated reconciliation process generates events:The following events are generated during this process:HCO_CHANGED / HCP_CHANGED - In this case, Reltio has not generated ENTITY_CHANGED event for the entityBased on Reltio to HUB comparison, when the comparison result contains ATTRIBUTE_VALUE_MISSING or ATTRIBUTE_VALUE_DIFFERENT for the entity the event is generated.The events are aggregated based on URI so only one change event for the selected entity is generatedHCO_CREATED / HCP_CHANGED - In this case, Reltio has not generated ENTITY_CREATED event for the entityBased on Reltio to HUB comparison when the comparison result contains ENTITY_MISSING difference the create event is generated. It means that Reltio contains the entity and this entity is missing HUB mongo collection, so there is a need to generate and send missing CREATED events.HCO_REMVED - In this case, Reltio has not generated ENTITY_REMOVED event for the entityBased on HUB to Reltio comparison when the comparison result contains ENTITY_MISSING difference the delete event is generated. 
It means that the HUB cache contains an additional entity that was deactivated/removed from the Reltio system, so there is a need to generate and send the missing REMOVED events.HCO_MERGED and HCO_LOST_MERGE - In this case, Reltio has not generated an ENTITY_MERGED event for the winner entity and ENTITY_LOST_MERGE for the looser entity.Based on Reltio extracted data and HUB mongo cache these events are generated.Entities from source Reltio data are matched by crosswalk value with EntityHistory Mongo data.When Reltio entity URI does not match the Mongo Entity URI and Reltio does not contain entity presented in Mongo and data that was matched by crosswalk value, it means that this entity was merged in Reltio.Then MERGED and LOST_MERGE event is generated for these entities.2. Next, Event Publisher receives events from the internal Kafka topic and calls MDM Gateway API to retrieve the latest state of Entity from Reltio. Entity data in JSON is added to the event to form a full event. For REMOVED events, where Entity data is by definition not available in Reltio at the time of the event, Event Publisher fetches the cached Entity data from Mongo database instead.3. Event Publisher extracts the metadata from Entity (type, country of origin, source system).4. Entity data is stored in the MongoDB database, for later use5. For every Reltio event, there are two Publishing Hub events created: one in Simple mode and one in Event Sourcing (full) mode. Based on the metadata, and Routing Rules provided as a part of application configuration, the list of the target destinations for those events is created. The event is sent to all matched destinations to the target topic (<env>-out-full-<client>) when the event type is full or (<env>-out-simple-<client>) when the event type is simple. "
},
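The two-way comparison described in the record above (Reltio export vs the HUB entityHistory cache, keyed by entity URI) can be sketched as a dictionary diff that yields the missing event kinds. Generic ENTITY_* labels stand in for the HCP_/HCO_-specific event types, and the flat attribute dicts are a simplification:

```python
def classify_events(reltio, hub):
    """Compare a Reltio export and the HUB cache, both keyed by entity URI.

    Returns (event_kind, uri) pairs for the events HUB never received.
    """
    events = []
    for uri, attrs in reltio.items():
        if uri not in hub:
            events.append(("ENTITY_CREATED", uri))   # missing from HUB cache
        elif attrs != hub[uri]:
            events.append(("ENTITY_CHANGED", uri))   # attributes out of sync
    for uri in hub:
        if uri not in reltio:
            events.append(("ENTITY_REMOVED", uri))   # gone from Reltio
    return events

reltio = {"entities/1": {"name": "A"}, "entities/2": {"name": "B"}}
hub = {"entities/1": {"name": "A-old"}, "entities/3": {"name": "C"}}
print(sorted(classify_events(reltio, hub)))
```

In the real process these events are then sent to the inbound Kafka topic and flow through standard HUB event processing.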
{
"title": "HUB Reconciliation Process V2",
"pageID": "164470184",
"pageLink": "/display/GMDM/HUB+Reconciliation+Process+V2",
"content": "Hub reconciliation process is starting from downloading reconciliation.properties file with following information:reconciliationType - reconciliation type - possible values: FULL_RECONCILIATION or PARTIAL_RECONCILIATION (since last run)eventType - event type - it is used in in generating events for kafka - possible values: FULL or CROSSWALK_ONLYreconcileEntities - if set to true entities will be reconciliatedreconcileRelations - if set to true relations will be reconciliatedreconcileMergeTree - if set to true mergeTree will be reconciliatedSets hub reconciliation properties in the processIf reconcileEntities is set to true that process for reconciliate entities is started<entities_get_last_timestamp> Process gets last timestamp when entities was lately exported<entities_export> Entities export is triggered from Reltio - this step is done by groovy script<entities_export_sensor> Process is checking if export is finished by verifing if the file SUCCESS with manifest.json exists on S3 folder /us/<env>/inboud/hub/hub_reconciliation/entities/inbound/entities_export_<timestamp><entities_set_last_timestamp> In this step process is setting timestamp for future reconciliation of entities - it is set in airflow variables<entities_generate_hub_reconciliation_events> this step is responsible for checking which entities has been changed and generate events for changed entitiesfirstly we get export file from S3 folder /us/<env>/inboud/hub/hub_reconciliation/entities/inbound/entities_export_<timestamp>we unzip the file in bash scriptfor the unzipped file we there are two optionsif we useChecksum than calculateChecksum groovy script is executed which calculates checksum for exported entities and generates ReconciliationEvent only with checksumif we don't useChecksum than ReconciliationEvent is generated with whole entityin the last step we send those generated events to specified kafka topics Events from topic will be processed by reconciliation serviceReconciliation 
service is checking basing on checksum change/changes if PublisherEvent should be generated it compares checksum if it exists from ReconciliationEvent with the one that we have in entityHistory tableit compares entity objects from ReconciliationEvent with the one that we have in mongo in entityHistory table if checksum is absent - objects on both sides are normalized before compare processit compares SimpleCrosswalkOnlyEntity objects if CROSSWALK_ONLY reconciliation event type is choosen<entities_export_archive> - move export folder on S3 from inbound to archive folder4. If reconcileRelations is set to true that process for reconciliate relations is started<relations_get_last_timestamp> Process gets last timestamp when relations was lately exported<relations_export> Relations export is triggered from Reltio - this step is done by groovy script<relations_export_sensor> Process is checking if export is finished by verifing if the file SUCCESS with manifest.json exists on S3 folder /us/<env>/inboud/hub/hub_reconciliation/relations/inbound/relations_export_<timestamp><relations_set_last_timestamp> In this step process is setting timestamp for future reconciliation of relations - it is set in airflow variables<relations_generate_hub_reconciliation_events> this step is responsible for checking which relations has been changed and generate events for changed relationsfirstly we get export file from S3 folder /us/<env>/inboud/hub/hub_reconciliation/relations/inbound/relations_export_<timestamp>we unzip the file in bash scriptfor the unzipped file we there are two optionsif we useChecksum than calculateChecksum groovy script is executed which calculates checksum for exported relations and generates ReconciliationEvent only with checksumif we don't useChecksum than ReconciliationEvent is generated with whole relationin the last step we send those generated events to specified kafka topic Events from topic will be processed by reconciliation serviceReconciliation service is 
checking basing on checksum change/object changes if PublisherEvent should be generated it compares checksum if it exists from ReconciliationEvent with the one that we have in mongo in entityRelation tableit compares relation objects from ReconciliationEvent with the one that we have in mongo in entityRelation table if checksum is absent - objects on both sides are normalized before compare processit compares SimpleCrosswalkOnlyRelation objects if CROSSWALK_ONLY reconciliation event type is choosen<relations_export_archive> - move export folder on S3 from inbound to archive folder5. If reconcileMergeTree is set to true that process for reconciliate relations is started<merge_tree_get_last_timestamp> Process gets last timestamp when merge tree was lately exported<merge_tree_export> Merge tree export is triggered from Reltio - this step is done by groovy script<merge_tree_export_sensor> Process is checking if export is finished by verifing if the file SUCCESS with manifest.json exists on S3 folder /us/<env>/inboud/hub/hub_reconciliation/merge_tree/inbound/merge_tree_export_<timestamp><merge_tree_set_last_timestamp> In this step process is setting timestamp for future reconciliation of merge tree - it is set in airflow variables<merge_tree_generate_hub_reconciliation_events> this step is responsible for checking which merge tree has been changed and generate events for changed merge tree objectsfirstly we get export file from S3 folder /us/<env>/inboud/hub/hub_reconciliation/merge_tree/inbound/merge_tree_export_<timestamp>we unzip the file in bash scriptfor the unzipped file we there are two optionsif we useChecksum than calculateChecksum groovy script is executed which creates ReconciliationMergeEvent with uri of the main object and list of loosers uriif we don't useChecksum than ReconciliationEvent is generated with whole merge tree objectin the last step we send those generated events to specified kafka topic Events from topic will be processed by reconciliation 
serviceReconciliation service is sending merge and lost_merger PublisherEvent for winner and every looser<merge_tree_export_archive> - move export folder on S3 from inbound to archive folder"
},
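The checksum path described in the record above boils down to: normalize the exported object, checksum it, and emit a PublisherEvent only when the checksum differs from the cached one. A minimal sketch, assuming canonical sorted-key JSON and SHA-256 as the normalization and hash (the actual calculateChecksum groovy script may differ):

```python
import hashlib
import json

def checksum(entity):
    """Checksum of a normalized entity: canonical JSON, then SHA-256."""
    canonical = json.dumps(entity, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def needs_publisher_event(exported_entity, cached_checksum):
    """Emit a PublisherEvent only when the exported state differs from cache."""
    return checksum(exported_entity) != cached_checksum

cached = checksum({"uri": "entities/1", "name": "A"})
# Same content, different key order: normalization makes them equal.
print(needs_publisher_event({"name": "A", "uri": "entities/1"}, cached))  # False
print(needs_publisher_event({"uri": "entities/1", "name": "B"}, cached))  # True
```

Carrying only checksums through Kafka keeps the ReconciliationEvents small; the full-object path is the fallback when checksums are not used.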
{
"title": "import_merges_from_reltio",
"pageID": "310943426",
"pageLink": "/display/GMDM/import_merges_from_reltio",
"content": "DescriptionSchedules reltio merges export, and imports it into mong.This dag is scheduled by china_import_and_gen_merge_report and data imported into mongo are used by china_merge_report to generate china raport filesExamplehttps://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=import_merges_from_reltio_gbl_prod&root=&num_runs=25&base_date=2023-04-06T00%3A05%3A20Z"
},
{
"title": "import_pfdcr_from_reltio",
"pageID": "310943428",
"pageLink": "/display/GMDM/import_pfdcr_from_reltio",
"content": "DescriptionSchedules reltio entities export, download it from s3, make small changes in export and import into mongo.This dag is scheduled by china_import_and_gen_dcr_statistics_report and data imported into mongo is used by china_dcr_statistics_report to generate china raport filesExamplehttps://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=import_pfdcr_from_reltio_gbl_prod"
},
{
"title": "inc_batch",
"pageID": "310943432",
"pageLink": "/display/GMDM/inc_batch",
"content": "DescriptionProces used to load idl files stored on s3 into Reltio. This dags is basing on mdmhub inc_batch_channel component.StepsCrate batch instance in mongo using batch-service /batchController endpointDownload idl files from s3 directoryExtract compressed archivesPreprocess files(eg. dos2unix )Run inc_batch_channel componentArchive input files and reportsExamplehttps://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=inc_batch_sap_gbl_prod"
},
{
"title": "Initial events generation process",
"pageID": "164470083",
"pageLink": "/display/GMDM/Initial+events+generation+process",
"content": "Newly connected clients doesn't have konwledge about entities which was created in MDM before theirs connecting. Due to this the initial event loading process was designed. Process loads events about already existing entities to client's kafka topic. Thanks this the new client is synced with MDM.AirflowThe process was implemented as Airflow's DAG:Process steps:prepareWorkingDir - prepares directories structure required for the process,getLastTimestamp - gets time marked of last process execution. This marker is used to determine which of events has been sent by previously running process. If the process is run first time the marker has always 0 value,getTimestamp - gets current time marker,generatesEvents - generates events file based on current Mongo state. Data used to prepare event messages is selected based on condition entity.lastModificationDate > lastTimestamp,divEventsByEventKind - divides events file based on event kind: simple or full,loadFullEvents* - it is a group of steps that populates full events to specific topic. The amount of this steps is based on amount of topics specified in configuration,loadSimpleEvents* - similar to above, those steps populates simple events to specific topic. The amount of this steps is based on amount of topics specified in configuration,setLastTimestamp - save current time marker. 
It will be used in the next process execution as last time marker.Configuration and schedulingThe process can be started on demand.The Process's configuration is stored in the MDM Environment configuration repository.To enable the process on specific environment:Its should be valid with template "generate_events_for_[client name]" and added to the list "airflow_components" which is defined in "inventory/[env name]/group_vars/gw-airflow-services/all.yml" file,Create configuration file in "inventory/[env name]/group_vars/gw-airflow-services/generate_events_for_[client name].yml" with content as below:The process configuration\n---\n\ngenerate_events_for_test_name: "generate_events_for_test" #Process name. It has to be the same as in "airflow_components" list avaiable in all.yml\ngenerate_events_for_test_base_dir: "{{ install_base_dir }}/{{ generate_events_for_test_name }}"\ngenerate_events_for_test:\n dag: #Airflow's DAG configuration section\n template: "generate_events.py" #do not change\n variables:\n DOCKER_URL: "tcp://euw1z1dl039.COMPANY.com:2376" #do not change\n dataDir: "{{ generate_events_for_test_base_dir }}/data" #do not change\n configDir: "{{ generate_events_for_test_base_dir }}/config" #do not change\n logDir: "{{ generate_events_for_test_base_dir }}/log" #do not change\n tmpDir: "{{ generate_events_for_test_base_dir }}/tmp" #do not change\n user:\n id: "7000" #do not change\n name: "mdm" #do not change\n groupId: "1002" #do not change\n groupName: "docker" #do not change\n mongo: #mongo configuration properties\n host: "localhost"\n port: "27017"\n user: "mdm_gw"\n password: "{{ secret_generate_events_for_test.dag.variables.mongo.password }}" #password is taken from the secret.yml file\n authDB: "reltio"\n kafka: #kafka configuration properties\n username: "hub"\n password: "{{ secret_generate_events_for_test.dag.variables.kafka.password }}" #password is taken from the secret.yml file\n servers: "10.192.71.136:9094"\n properties:\n "security.protocol": 
SASL_SSL\n "sasl.mechanism": PLAIN\n "ssl.truststore.location": /opt/kafka_utils/config/kafka_truststore.jks\n "ssl.truststore.password": "{{ secret_generate_events_for_test.dag.variables.kafka.properties.sslTruststorePassword }}" #password is taken from the secret.yml file\n "ssl.endpoint.identification.algorithm": ""\n countries: #Events will be generated only for below countries\n - CR\n - BR\n targetTopics: #Target topics list. It is array of pairs topic name and event Kind. Only simple and full event kind are allowed.\n - topic: dev-out-simple-int_test\n eventKind: simple\n - topic: dev-out-full-int_test\n eventKind: full\n\n...\nthen the playbook install_mdmgw_services.yml needs to be invoked to update runtime configuration."
},
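The selection and division steps in the record above (pick entities with lastModificationDate > lastTimestamp, then split by event kind) can be sketched as below; the document shapes and the epoch-integer timestamps are assumptions:

```python
def select_new_events(entities, last_timestamp):
    """Pick entities modified after the last run's time marker."""
    return [e for e in entities if e["lastModificationDate"] > last_timestamp]

def divide_by_event_kind(events):
    """Split events into the simple/full groups loaded to separate topics."""
    groups = {"simple": [], "full": []}
    for e in events:
        groups[e["eventKind"]].append(e)
    return groups

entities = [
    {"uri": "entities/1", "lastModificationDate": 100, "eventKind": "simple"},
    {"uri": "entities/2", "lastModificationDate": 250, "eventKind": "full"},
    {"uri": "entities/3", "lastModificationDate": 300, "eventKind": "simple"},
]
selected = select_new_events(entities, last_timestamp=200)
groups = divide_by_event_kind(selected)
print(len(selected), [e["uri"] for e in groups["simple"]])
```

Each group would then be published to the configured targetTopics entry matching its eventKind, and the current time marker saved for the next run.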
{
"title": "lookup_values_export_to_s3",
"pageID": "310943435",
"pageLink": "/display/GMDM/lookup_values_export_to_s3",
"content": "DescriptionProcess used to extract lookup values from mongo and upload it to s3. The file from s3 i then pulled into snowflake.Examplehttps://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=lookup_values_export_to_s3_gbl_prod"
},
{
"title": "MAPP IDL Export process",
"pageID": "164470173",
"pageLink": "/display/GMDM/MAPP+IDL+Export+process",
"content": "DescriptionProcess used to generate excel with entities export. Export is based on two monogo collections: lookupValues and entityHistory. Excel files are then uploaded into s3 directoryExcels are used in MAPP Review process on gbl_prod environment.Examplehttps://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=mapp_idl_excel_template_gbl_prod"
},
{
"title": "mapp_update_idl_export_config",
"pageID": "310943437",
"pageLink": "/display/GMDM/mapp_update_idl_export_config",
"content": "DescriptionProcess is used to update configuration of mapp_idl_excel_template dags stored in mongo.Configuration is stored in mappExportConfig collection and consists of information about configuration and crosswalks order for each country.Examplehttps://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=mapp_update_idl_export_config_gbl_prod"
},
{
"title": "merge_unmerge_entities",
"pageID": "310943439",
"pageLink": "/display/GMDM/merge_unmerge_entities",
"content": "DescriptionThis dag implements batch Batch merge & unmerge process. It download file from s3 with list of files to merge or unmerge and then process documents. To process documents batch-service is used. After documents are processed report is generated and transferred to s3 directory.FlowBatch service batch creationDownloading source file from s3Input file conversion to unix formatFile processingRecords are sent to batch service using /bulkService endpoint.After all entities are sent then Loading stage is closed and statistics are written to stage statisticsWaiting for batch to be completedrecords sent to batch service are then transferred to manager internal topic and then processed by manager which sends requests to Reltio. If all events are processed then batch processing stage is closed which causes whole batch to be completed.Report is generated using batchEntittyProcessStatus mongo collection and saved in temporary report collectionReport is exported and saved in s3 bucket altogether with input fileInput directory is cleared Tmp report mongo collection is dropped Examplehttps://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/tree?dag_id=merge_unmerge_entities_emea_prod"
},
{
"title": "micro_bricks_reload",
"pageID": "310943463",
"pageLink": "/display/GMDM/micro_bricks_reload",
"content": "DescriptionDag extract data from snowflake table that contains microbricks exceptions. Data is then comited in git repository from where it will be pulled by consul and loaded into mdmhub components.If microbricks mapping file has changed since last dag run then we'll wait for mapping reload and  copy events from {{ env_name }}-internal-microbricks-changelog-events topic into {{ env_name }}-internal-microbricks-changelog-reload-events"Examplehttps://airflow-amer-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=micro_bricks_reload_amer_prod"
},
{
"title": "move_ods_",
"pageID": "310943441",
"pageLink": "/pages/viewpage.action?pageId=310943441",
"content": "DescriptionDag copies files from external source s3 buckets and uploads them to our internal s3 bucket to the desired location. This data is later used in inc_batch_* dagsExamplehttps://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=move_ods_eu_export_gbl_prod"
},
{
"title": "rdm_errors_report",
"pageID": "310943445",
"pageLink": "/display/GMDM/rdm_errors_report",
"content": "DEPRECATEDDescriptionThis dags generate report with all rdm errors from ErrorLogs collection and publish it to s3 bucket.Examplehttps://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=rdm_errors_report_gbl_prod"
},
{
"title": "reconcile_entities",
"pageID": "337846202",
"pageLink": "/display/GMDM/reconcile_entities",
"content": "Details:Process allowing export data from mongo based on query and generate https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html#/Reconciliation/reconcileEntities request for each package or generate a flat file from exported entities and push to Kafka reltio-events.Steps:Pull config from requeste.g. {'entitiesQuery': {'country': {'$in': ['FR']}, 'sources': {'$in': ['ONEKEY']}}}Drop mongo collections used in previous runGenerating list of entities and/or relations to reconcile using provided queryTrigger /reconciliation/entities and/or /reconciliation/relations endpoint for all entities and relations from the list from previous step. This will cause generating Reltio event and sending it to Hub processing.Examplehttps://airflow-emea-nprod-gbl-mdm-hub.COMPANY.com/tree?dag_id=reconcile_entities_emea_dev&root="
},
{
"title": "reconciliation_ptrs",
"pageID": "310943447",
"pageLink": "/display/GMDM/reconciliation_ptrs",
"content": "DEPRECATEDDetailsProcess allowing to reconcile events for ptrs source.Logic: Reconciliation processSteps:Downloading input file with checksums from s3 directoryDrop mongo collections used in previous runInporting input file into mongo reconciliation_ptrs collection and prepare output collection reconciliationRecords_ptrsTrigger /resendLastEvent publisher endpoint to resend event for each entity from input file that checksum differs. This will cause event to be generated to ptrs output topicExamplehttps://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=reconciliation_ptrs_emea_prod"
},
{
"title": "reconciliation_snowflake",
"pageID": "310943449",
"pageLink": "/display/GMDM/reconciliation_snowflake",
"content": "DetailsProcess allowing to reconcile events for snowflake topic.Logic: Reconciliation processSteps:Downloading input file with entities checksums from s3 directoryDrop mongo collections used in previous runInporting input file into mongo reconciliation_snowflake collection and prepare output collection reconciliationRecords_snowflakeTrigger /resendLastEvent publisher endpoint to resend event for each entity from input file that checksum differs. This will cause event to be generated to snowflake topic and consumed by snowflake kafka connectorExamplehttps://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/graph?dag_id=reconciliation_ptrs_emea_prod"
},
{
"title": "Kubernetes",
"pageID": "218693740",
"pageLink": "/display/GMDM/Kubernetes",
"content": ""
},
{
"title": "Platform Overview",
"pageID": "218452673",
"pageLink": "/display/GMDM/Platform+Overview",
"content": "In the latest physical architecture, MDM HUB services are deployed in Kubernetes clusters managed by COMPANY Digitial Kubernates Service (PDKS)There are non-prod and prod cluster for each region: AMER, EMEA, APAC ArchitectureThe picture below presents the layout of HUB services in Kubernetes cluster managed by PDKS  NodesThere are two groups of nodes:Static, stateful nodes that have Portworx storage configured dedicated for running backend stateful servicesInstance Type:  r5.2xlargeNode labels: mdmhub.COMPANY.com/node-type=staticDynamic nodes - dedicated for stateless services that are dynamically scaledInstance Type:  m5.2xlargeNode labels: mdmhub.COMPANY.com/node-type=dynamicStoragePortworx storage appliance is used to manage persistence volumes required by stateful components.Configuration: Default storage Class:  pwx-repl2-scReplication: 2Operators MDM HUB uses K8 operators to manage applications like:Application NameOperator (with link)VersionMongoDBMongo Comunity operator0.6.2KafkaStrimzi0.27.xElasticSearchElasticsearch operator1.9.0PrometheusPrometheus operator8.7.3MonitoringCluster are monitored by local Prometheus service integrated with central Prometheus and Grafana services For details got to monitoring section.Logging All logs from HUB components are sent to Elastic service and can be discovered by Kibana UI.For details got to Kibana dashboard section. Backend componentsNameVersionMongoDB4.2.6Kafka2.8.1ElasticSearch7.13.1Prometheus2.15.2Scaling TO BE ImplementationKubernetes objects are implemented using helm - package manager for Kubernetes. 
There are several modules that, connected together, make up the MDMHUB application: operators - delivers a set of operators used to manage the backend components of MDMHUB: Mongo operator, Kafka operator, Elasticsearch operator, Kong operator and Prometheus operator; consul - delivers the Consul server instance, user management tools and git2consul - the tool used to synchronize the Consul key-value registry with a Git repository; airflow - deploys an instance of the Airflow server; eck - using the Elasticsearch operator, creates the EFK stack - Kibana, Elasticsearch and Fluentd; kafka - installs the Kafka server; kafka-resources - installs Kafka topics, Kafka connector instances, managed users and ACLs; kong - using the Kong operators, installs a Kong server; kong-resources - delivers the basic Kong configuration: users, plugins etc.; mongo - installs the Mongo server instance, configures users and their permissions; monitoring - installs the Prometheus server and the exporters used to monitor resources, components and endpoints; migration - a set of tools that supported the migration from the old (EC2-based) environments to the new Kubernetes infrastructure; mdmhub - delivers the MDMHUB components, their configuration and dependencies. All the above modules are stored in the application source code as part of the helm module. Configuration: The runtime configuration is stored in the mdm-hub-cluster-env repository. The configuration has the following structure: [region]/ - MDMHUB region, e.g.: emea, amer, apac    nprod|prod/ -  cluster class. 
nprod or prod values are possible,        namespaces/ - logical spaces where MDMHUB components are deployed            monitoring/ - configuration of prometheus stack                service-monitors/                values.yaml - namespace level variables            [region]-dev/ - specific configuration for dev env eg.: kafka topics, hub components configuration                config_files/ - MDMHUB components configuration files                    all|mdm-manager|batch-service|.../                values.yaml - variables specific for dev env.                kafka-topics.yaml - kafka topic configuration            [region]-qa/ - specific configuration for qa env                config_files/                    all|mdm-manager|batch-service|.../            [region]-stage/ - specific configuration for stage env                config_files/                    all|mdm-manager|batch-service|.../                values.yaml                kafka-topics.yaml            [region]-prod/ - specific configuration for prod env                config_files/                    all|mdm-manager|batch-service|.../                values.yaml                kafka-topics.yaml            [region]-backend/ - backend services configuration: EFK stack, Kafka, Mongo etc.                eck-config/ #eck specific files                values.yaml            kong/ - configuration of Kong proxy                values.yaml            airflow/ - configuration of Airflow scheduler                values.yaml        users/ #users configuration            mdm_test_user.yaml            callback_service_user.yaml            ...        values.yaml #cluster level variables        secrets.yaml #cluster level sensitive data    values.yaml #region level variablesvalues.yaml #values common for all environments and clustersinstall.sh #implementation of deployment procedure. The application is deployed by the install.sh script. 
The script does this in the following steps: Decrypt sensitive data: passwords, certificates, tokens, etc. Prepare the order of values and secrets precedence (the last listed variables override all other variables): common values for all environments, region values, cluster variables, users values, namespace values. Download the helm package. Do some package customization if required. Install the helm package to the selected cluster. Deployment: Build Job: mdm-hub-inbound-services/feature/kubernates. Deploy: All Kubernetes deployment jobs. AMER: Deploy backend: Kong, Kafka, MongoDB, EFK, Consul, Airflow, Prometheus. Deploy MDM HUB. Administration: Administration tasks and standard operating procedures are described here."
},
{
"title": "Migration guide",
"pageID": "218452659",
"pageLink": "/display/GMDM/Migration+guide",
"content": "Phase 0Validate configuration:validate if all configuration was moved correctly - compare application.yml files, check topic name prefix (on k8s env the prefix has 2 parts), check Reltio confguration etc,Check if reading event from sqs is disabled on k8s - reltio-subscriber,Check if reading evets from MAP sqs is disabled on k8s - map-channel,Check if event-publisher is configured to publish events to old kafka server - all client topics (*-out-*) without snowflake.Check if network traffic is opened:from old servers to new REST api endpoint,from k8s cluster to old kafka,from k8s cluster to old REST API endpoint,Make a mongo dump of data collections from mongo - remember start date and time:find mongo-migration-* pod and run shell on it.cd /opt/mongo_utils/datamkdir datacd datanohup dumpData.sh <source database schema> &start date is shown in the first line of log file:head -1 nohup.out #example output → [Mon Jul  4 12:09:32 UTC 2022] Dumping all collections without: entityHistory, entityMatchesHistory, entityRelations and LookupValues from source database mongovalidate the output of dump tool by:cd /opt/mongo_utils/data/data && tail -f nohup.outRestore dumped collections in the new mongo instance:cd /opt/mongo_utils/data/datamv nohup.out nohup.out.dumpnohup mongorestore.sh dump/ <target database schema> <source database schema> &tail -f nohup.out #validate the outputValidate the target database and check if only entityHistory, entityMatchesHistory, entityRelations and LookupValues coolections were copied from source. If there are more collections than mentioned, you can delete them.Create a new consumer group ${new_env}-event-publisher for sync-event-publisher component on topic ${old_env}-internal-reltio-proc-events located on old Kafka instance. 
Set the offset to the start date and time of the mongo dump - do this with the command line client because AKHQ has a problem with this action. Configure and run sync-event-publisher - it is responsible for the synchronization of the Mongo DB with the old environment. The component has to be connected to the old Kafka and Manager, and the routing rules list has to be empty. Phase 1 (external clients are still connected to the old endpoints of the REST services and Kafka): Check whether something is waiting for processing on the Kafka topics and whether there are active batches in the batch service. If there is data on the Kafka topics, stop the subscriber and wait until all data in the enricher, callback and publisher is processed. Check it by monitoring the input topics of these components. Wait until all data is processed by the Snowflake connector. Disable the Jenkins jobs. Stop the outbound (mdmhub) components. Stop the inbound (mdmgw) components. Disable all Airflow DAGs assigned to the migrated environment. Turn off the Snowflake connector on the old environment. Turn off sync-event-publisher on the k8s environment. Run the Mongo Migration Tool to copy the Mongo databases - copy only the collections with caches, the data collections were synced before (mongodump + sync-event-publisher). Before starting, check the collections in the old Mongo instance. 
You can delete all the temporary collections lookup_values_export_to_s3_*, reconciliation_* etc. #dumping: cd /opt/mongo_utils/data ; mkdir non_data ; cd non_data ; nohup dumpNonData.sh <source database schema> & ; tail -f nohup.out #validate the output. #restoring: nohup mongorestore.sh dump/ <target database schema> <source database schema> & ; tail -f nohup.out #validate the output. Enable the Reltio subscriber on K8s - check the SQS credentials and turn on the SQS route. Enable processing events on the MAP SQS queues - if map-channel exists on the migrated environment. Reconfigure Kong: forward all incoming traffic to the new instance of MDMHUB, include rules for API paths from MR-3140. Delete all oauth and key-auth plugins https://docs.konghq.com/gateway-oss/2.5.x/admin-api/#delete-plugin - it might be required to remove routes when the Ansible playbook throws a duplication error https://docs.konghq.com/gateway-oss/2.5.x/admin-api/#delete-route. Start the Snowflake connector located on the k8s cluster. Turn on the components (without sync-event-publisher) on the k8s environment. Change the API URL and secret (manager apikey) in the Snowflake deployment configuration (Ansible). Change the API key in the dependent API routers. Install the Kibana dashboards. Add the mappings to Monstache. Add the transaction topics to fluentd. Phase 2 (environment runs in K8s): Run the Kibana Migration Tool to copy indexes - after migration. Run Kafka Mirror Maker to copy all data from the old output topics to the new ones. Phase 2 (all external clients confirmed that they switched their applications to the new endpoints): Wait until all clients have switched to the new endpoints. Phase 3 (all environments are migrated to Kubernetes): Stop the old Mongo instance. Stop fluentd and Kibana. Stop Kafka Mirror Maker. Stop Kafka and Kong on the old environment. Decommission the old environment hosts. To remember after migration: Review CPU requests on k8s https://pdcs-som1d.COMPANY.com/c/c-57wsz/monitoring + Resource management for components - done. MongoDB on k8s has only 1 instance. Kong API delete plugin - https://docs.konghq.com/gateway-oss/2.5.x/admin-api/#delete-plugin. K8s: add the consul-server service to ingress - the Consul UI already exposes the API https://consul-amer-nprod-gbl-mdm-hub.COMPANY.com/v1/kv/. The Consul UI redirect doesn't work due to Consul being stubborn about using the /ui path. Decision: skip this, send the client the new Consul address. Fix the issue with the MDMHUB manager and batch-service oauth user being duplicated in mappings - done. Verify if the MDM HUB components are using the external API address and switch to the internal k8s service address - checked, confirmed nothing is using external addresses. Check if Portworx requires setting affinity rules to run only on 3 nodes. akhq - disable the default k8s token automount - done"
},
{
"title": "PDKS Cluster tests",
"pageID": "228917568",
"pageLink": "/display/GMDM/PDKS+Cluster+tests",
"content": "AssumptionsAddresses used in testsAPI: https://api-amer-nprod-gbl-mdm-hub.COMPANY.com/api-batch-amer-dev/actuator/health/KafkaConsulK8s resources3 static EC2 nodesCPU reserved >67%RAM reserved >67%0-4 dynamic EC2 nodes in Auto Scaling Group, scaled based on loadEach MDM Hub app deployed in 1 replica, so no redundancy.Failover testsExpected resultsNo downtimes of API and all services exposed to clients.ScenarioOne EKS node downForce node drain with timeout and grace period set to low 10 seconds. ResultsOne EKS node downAPI was unavailable for ~1 or ~3 minutes. Unavailability was handled correctly by Kong by sending HTTP 500 responsesStatic nodes resources were reserved in more than 67%, so draining 1 of 3 nodes caused scaling up dynamic nodesEvery time K8s managed to start new pod and heal all servicesThere was no need for manual operational work to fix anythingConclusionsTest was partially successfulFailover workedAPI downtime was shortNo operational work was requiredTo remove risk of services unavailabilityIncrease number of MDM Hub instancesTo reduce time of services unavailabilityTest if reducing Readiness time of a Pod to less than 60s could workScale testsExpected resultsEKS node scaling up and down should be automatic based on cluster capacity. ScenariosScale pods up, to overcome capacity of static ASG, then scale down.ResultsScale up and down test was carried out while doing failover tests. When 1 of 3 static nodes became unavailable, ASG scaled up number of dynamic instances. First to 1 and then to 2. After a static node was once again operational, ASG scaled down dynamic nodes to 0.Conclusions"
},
{
"title": "Portworx - storage administration guide",
"pageID": "218458438",
"pageLink": "/display/GMDM/Portworx+-+storage+administration+guide",
"content": "OutdatedPortworx is not longer used in MDM Hub Kubernetes clustersPortworx, what is it?Commercial product, validated storage solution and a standard for PDKS Kubernetes clusters. It uses AWS EBS volumes, adds a replication and provides a k8s storage class as a result. It then can be used just as any k8s storage by defining PVC. What problem does it solve?How to:use Portworx storageConfigure Persistent Volume Claim to use one of Portworx Storage Classes configured on K8s.2 classes are availablepwx-repl2-sc - storage has 2 replicas - use on non-prodpwx-repl3-sc - storage has 3 replicasextend volumesIn Helm just change PVC requested size and deploy changes to a cluster with a Jenkins job. No other action should be required. Example change: MR-3124 change persistent volumes claimscheck status, statistics and alertsTBDOne of the tools should provide volume status and statistics:https://mdm-monitoring.COMPANY.com/grafana/d/garysdevil-kube-state-metrics-v2/kube-state?orgId=1&refresh=30s&from=now-1h&to=nowhttps://amrdrml472.COMPANY.com:9443/loginhttps://us2.app.sysdig.com/api/saml/COMPANY?product=SDCResponsibilitiesWho is responsible for what is described in the table below. In short: if any change in Portworx setup is required, create a support ticket to a queue found on Support information with queues names page.Additional documentationPDCS Kubernetes Storage Management Platform Standards (If link doesn't work, go to http://containers.COMPANY.com/ search in "PDKS Docs" section for "WTST-0299 PDCS Kubernetes Storage Management Platform Standards")Kubernetes Portworx storage class documentationPortworx on Kubernetes docs"
},
{
"title": "Resource management for components",
"pageID": "218444330",
"pageLink": "/display/GMDM/Resource+management+for+components",
"content": "OutdatedMDM Hub components resources are managed automatically by the Vertical Pod Autoscaler - table below is no longer applicableK8s resource requests vs limits Quotes on how to understand Kubernetes resource limitsrequests is a guarantee, limits is an obligationGalo NavarroWhen you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for Pods. The scheduler ensures that, for each resource type, the sum of the resource requests of the scheduled containers is less than the capacity of the node. Note that although actual memory or CPU resource usage on nodes is very low, the scheduler still refuses to place a Pod on a node if the capacity check fails. This protects against a resource shortage on a node when resource usage later increases, for example, during a daily peak in request rate.How Pods with resource requests are scheduledMDM Hub resource configuration per componentIMPORTANT: table is outdated. 
The current CPU and memory configuration is in the mdm-hub-cluster-env git repository. Component (CPU request/limit [m]; memory request/limit [Mi]): mdm-callback-service 200/4000; 1600/2560. mdm-hub-reltio-subscriber 200/1000; 400/640. mdm-hub-event-publisher 200/2000; 800/1280. mdm-hub-entity-enricher 200/2000; 800/1280. mdm-api-router 200/4000; 800/1280. mdm-manager 200/4000; 1000/2000. mdm-reconciliation-service 200/4000; 1600/2560. mdm-batch-service 200/2000; 800/1280. Kafka 500/4000; 10000 (Xmx 3GB)/20000. Zookeeper 200/1000; 256/512. akhq 100/500; 256/512. kafka-connect 500/2000; 1000/2000. MongoDB 500/4000; 20000/32000. MongoDB agent 200/400; 200/500. Elasticsearch 500/2000; 8000/20000. Kibana 100/2000; 1024/1536. Airflow - scheduler 200/700; 512/2048. Airflow - webserver 200/700; 256/1024. Airflow - postgresql 250/-; 256/-. Airflow - statsd 200/500; 256/512. Consul 100/500; 256/512. git2consul 100/500; 256/512. Kong 100/2000; 512/2048. Prometheus 200/1000; 1536/3072. Legend: requires tuning / proposal / deployed. Useful links. Links helpful when talking about k8s resource management: Resource Management for Pods and Containers. How Pods with resource requests are scheduled. Sizing Kubernetes pods for JVM apps without fearing the OOM Killer. MDM Hub Kubernetes cluster configuration git repository"
},
{
"title": "Standards and rules",
"pageID": "218435163",
"pageLink": "/display/GMDM/Standards+and+rules",
"content": "K8s Limit definitionLimit size for CPU has to be defined in "m" (milliCPU), ram in "Mi" (mibibytes) and storage in "Gi" (Gibibytes). More details about resource limits you can find on https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/GB vs GiB: Whats the Difference Between Gigabytes and Gibibytes?At its most basic level, one GB is defined as 1000³ (1,000,000,000) bytes and one GiB as 1024³ (1,073,741,824) bytes. That means one GB equals 0.93 GiB. Source: https://massive.io/blog/gb-vs-gib-whats-the-difference/To check current resource configuration, check: Resource management for componentsDockerTo secure our images from changing of remote images which come from remote registries such as https://hub.docker.com/ before using remote these as a base image in the implementation, you have to publish the remote image in our private registry http://artifactory.COMPANY.com/mdmhub-docker-dev.Kafka objects naming standardsKafka topicsName template: <$envName>-$<topicType>-$<name>Topic Types: in - topics for producing events by external systemsout - topics for consuming events by external systemsinternal - topics used by HUB servicesConsumer GroupsName template: <$envName>-<$componentName>-[$processName]Standardized environment namesamer-devemea-qagblus-stagegbl-prodetc.Standardized component namesbatch-servicecallback-servicemdm-managerevent-publisherapi-routerreconciliation-servicereltio-subscriber"
},
{
"title": "Technical details",
"pageID": "218440550",
"pageLink": "/display/GMDM/Technical+details",
"content": "NetworkSubnet nameSubnet maskRegionDetailssubnet-07743203751be58b910.9.64.0/18amersubnet-0dec853f7c9e507dd10.9.0.0/18amersubnet-018f9a3c441b24c2b●●●●●●●●●●●●●●●apacsubnet-06e1183e436d67f2910.116.176.0/20apacsubnet-0e485098a41ac03ca10.90.144.0/20emeasubnet-067425933ced0e77f10.90.128.0/20emea"
},
{
"title": "SOPs",
"pageID": "228923665",
"pageLink": "/display/GMDM/SOPs",
"content": "Standard operation procedures are available here."
},
{
"title": "Downstream system migration guide",
"pageID": "218452663",
"pageLink": "/display/GMDM/Downstream+system+migration+guide",
"content": "This chapter describes steps that you have to take if you want to switch your application to new MDM HUB instance.Direct channel (Rest services)If you use the direct channel to communicate with MDM HUB the only thing that you should do is changing of API endpoint addresses. The authentication mechanism, based on oAuth serving by Ping Federate stays unchanged. Please remember that probably network traffic between your services and MDMHUB has to be opened before switching your application to new HUB endpoints.The following table presents old endpoints and their substitutes in the new environment. Everyone who wants to connect with MDMHUB has to use new endpoints.EnvironmentOld endpointNew endpointAffected clientsDescriptionGBLUS DEV/QA/STAGEhttps://gbl-mdm-hub-us-nprod.COMPANY.com:8443/v1https://consul-amer-nprod-gbl-mdm-hub.COMPANY.com/v1ETLConsulGBLUS DEVhttps://gbl-mdm-hub-us-nprod.COMPANY.com:8443/dev-exthttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gblus-devCDW, ENGAGE, KOL_ONEVIEW, GRV, GRACE, ICUE, ESAMPLES, MULEManager APIGBLUS DEVhttps://gbl-mdm-hub-us-nprod.COMPANY.com:8443/dev-batch-exthttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-batch-gblus-devETLBatch APIGBLUS QAhttps://gbl-mdm-hub-us-nprod.COMPANY.com:8443/qa-exthttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gblus-qaCDW, ENGAGE, KOL_ONEVIEW, GRV, GRACE, ICUE, ESAMPLES, MULEManager APIGBLUS QAhttps://gbl-mdm-hub-us-nprod.COMPANY.com:8443/qa-batch-exthttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-batch-gblus-qaETL,Batch APIGBLUS STAGEhttps://gbl-mdm-hub-us-nprod.COMPANY.com:8443/stage-exthttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gblus-stageCDW, ENGAGE, KOL_ONEVIEW, GRV, GRACE, ICUE, ESAMPLES, MULEManager APIGBLUS STAGEhttps://gbl-mdm-hub-us-nprod.COMPANY.com:8443/stage-batch-exthttps://api-amer-nprod-gbl-mdm-hub.COMPANY.com/ext-api-batch-gblus-stageETL,Batch APIGBLUS 
PRODhttps://gbl-mdm-hub-us-prod.COMPANY.com/v1https://consul-amer-prod-gbl-mdm-hub.COMPANY.com/v1ETLConsulGBLUS PRODhttps://gbl-mdm-hub-us-prod.COMPANY.com/prod-exthttps://api-amer-prod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gblus-prodCDW, ENGAGE, KOL_ONEVIEW, GRV, GRACE, ICUE, ESAMPLES, MULEManager APIGBLUS PRODhttps://gbl-mdm-hub-us-prod.COMPANY.com/prod-batch-exthttps://api-amer-prod-gbl-mdm-hub.COMPANY.com/ext-api-batch-gblus-prodETLBatch APIEMEA DEV/QA/STAGEhttps://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/v1https://consul-emea-nprod-gbl-mdm-hub.COMPANY.com/v1ETLConsulEMEA DEVhttps://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/dev-exthttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-emea-devMULE, GRV, PforceRx, JORouter APIEMEA DEVhttps://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/dev-ext/gwhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-emea-devManager APIEMEA DEVhttps://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/dev-batch-exthttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-batch-emea-devETLBatch APIEMEA QAhttps://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/qa-exthttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-emea-qaMULE, GRV, PforceRx, JORouter APIEMEA QAhttps://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/qa-ext/gwhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-emea-qaManager APIEMEA QAhttps://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/qa-batch-exthttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-batch-emea-qaETLBatch APIEMEA STAGEhttps://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/stage-exthttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-emea-stageMULE, GRV, PforceRx, JORouter APIEMEA STAGEhttps://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/stage-ext/gwhttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-emea-stageManager APIEMEA STAGEhttps://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/stage-batch-exthttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-batch-emea-stageETLBatch APIEMEA 
PRODhttps://gbl-mdm-hub-emea-prod.COMPANY.com:8443/v1https://consul-emea-prod-gbl-mdm-hub.COMPANY.com/v1ETLConsulEMEA PRODhttps://gbl-mdm-hub-emea-prod.COMPANY.com:8443/prod-exthttps://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/ext-api-emea-prodMULE, GRV, PforceRxRouter APIEMEA PRODhttps://gbl-mdm-hub-emea-nprod.COMPANY.com:8443/prod-ext/gwhttps://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/ext-api-gw-emea-prodManager APIEMEA PRODhttps://gbl-mdm-hub-emea-prod.COMPANY.com:8443/prod-batch-exthttps://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/ext-api-batch-emea-prodBatch APIGBL DEVhttps://mdm-reltio-proxy.COMPANY.com:8443/dev-exthttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gbl-devMULE, GRV, JO, KOL_ONEVIEW, MAPP, MEDIC, ONEMED, PTRS, VEEVA_FIELD,Manager APIGBL QA (MAPP)https://mdm-reltio-proxy.COMPANY.com:8443/mapp-exthttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gbl-qaMULE, GRV, JO, KOL_ONEVIEW, MAPP, MEDIC, ONEMED, PTRS, VEEVA_FIELD,Manager APIGBL STAGEhttps://mdm-reltio-proxy.COMPANY.com:8443/stage-exthttps://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gbl-stageMULE, GRV, JO, KOL_ONEVIEW, MEDIC, ONEMED, PTRS, VEEVA_FIELDManager APIGBL PRODhttps://mdm-gateway.COMPANY.com/prod-exthttps://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/ext-api-gw-gbl-prodMULE, GRV, JO, KOL_ONEVIEW, MAPP, MEDIC, ONEMED, PTRS, VEEVA_FIELDManager APIGBL PRODhttps://mdm-gateway-int.COMPANY.com/gw-apihttps://api-emea-k8s-prod-gbl-mdm-hub.COMPANY.com/api-gw-gbl-prodCHINAManager APIEXTERNAL GBL DEVhttps://mdm-reltio-proxy.COMPANY.com:8443/dev-exthttps://api-emea-k8s-nprod-gbl-mdm-hub-ext.COMPANY.com/ext-api-gw-gbl-devMAP, GANT, MAPPManager APIEXTERNAL GBL QA (MAPP)https://mdm-reltio-proxy.COMPANY.com:8443/mapp-exthttps://api-emea-k8s-nprod-gbl-mdm-hub-ext.COMPANY.com/ext-api-gw-gbl-qaMAP, GANT, MAPPManager APIEXTERNAL GBL STAGEhttps://mdm-reltio-proxy.COMPANY.com:8443/stage-exthttps://api-emea-k8s-nprod-gbl-mdm-hub-ext.COMPANY.com/ext-api-gw-gbl-stageMAP, GANT, 
MAPPManager APIEXTERNAL GBL PRODhttps://mdm-gateway.COMPANY.com/prod-exthttps://api-emea-k8s-prod-gbl-mdm-hub-ext.COMPANY.com/ext-api-gw-gbl-prodMAP, GANT, MAPPManager APIEXTERNAL EMEA DEVhttps://api-emea-nprod-gbl-mdm-hub-ext.COMPANY.com:8443/dev-exthttps://api-emea-k8s-nprod-gbl-mdm-hub-ext.COMPANY.com/ext-api-emea-devMAP, GANT, MAPPRouter APIEXTERNAL EMEA QAhttps://api-emea-nprod-gbl-mdm-hub-ext.COMPANY.com:8443/qa-exthttps://api-emea-k8s-nprod-gbl-mdm-hub-ext.COMPANY.com/ext-api-emea-qaMAP, GANT, MAPPRouter APIEXTERNAL EMEA STAGEhttps://api-emea-nprod-gbl-mdm-hub-ext.COMPANY.com:8443/stage-exthttps://api-emea-k8s-nprod-gbl-mdm-hub-ext.COMPANY.com/ext-api-emea-stageMAP, GANT, MAPPRouter APIEXTERNAL EMEA PRODhttps://api-emea-prod-gbl-mdm-hub-ext.COMPANY.com:8443/prod-exthttps://api-emea-k8s-prod-gbl-mdm-hub-ext.COMPANY.com/ext-api-emea-prodMAP, GANT, MAPPRouter API. Streaming channel (Kafka): Switching to the new environment requires a configuration change on your side: Change the Kafka broker address. Change the JAAS configuration - in the new architecture, we decided to change the JAAS authentication mechanism to SCRAM. To be sure that you are using the right authentication, you have to change a few parameters in the Kafka connection: a. the JAAS login config file, whose path is specified in the "java.security.auth.login.config" Java property, should look like below: KafkaClient {  org.apache.kafka.common.security.scram.ScramLoginModule required username="<user>" ●●●●●●●●●●●●●●●●●●●>";}; b. change the value of the "sasl.mechanism" property to "SCRAM-SHA-512"; c. if you configure the JAAS login using the "sasl.jaas.config" property, you have to change its value to "org.apache.kafka.common.security.scram.ScramLoginModule required username="<user>" ●●●●●●●●●●●●●●●●●●●>";". You should receive the new credentials (username and password) in the email about changing the Kafka endpoints. 
Otherwise, to get the proper username and ●●●●●●●●●●●●●●●, contact our support team. The following table presents the old endpoints and their substitutes in the new environment. Everyone who wants to connect with MDMHUB has to use the new endpoints. EnvironmentOld endpointNew endpointAffected clientsDescriptionGBLUS DEV/QA/STAGEamraelp00007335.COMPANY.com:9094kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094ENGAGE, KOL_ONEVIEW, GRV, ICUE, MULEKafkaGBLUS PRODamraelp00007848.COMPANY.com:9094,amraelp00007849.COMPANY.com:9094,amraelp00007871.COMPANY.com:9094kafka-amer-prod-gbl-mdm-hub.COMPANY.com:9094ENGAGE, KOL_ONEVIEW, GRV, ICUE, MULEKafkaEMEA DEV/QA/STAGEeuw1z2dl112.COMPANY.com:9094mdm-reltio-proxy.COMPANY.com:9094 (external)kafka-b1-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094MAP (external), PforceRx, MULEKafkaEMEA PRODeuw1z2pl116.COMPANY.com:9094,euw1z1pl117.COMPANY.com:9094,euw1z2pl118.COMPANY.com:9094kafka-b1-emea-prod-gbl-mdm-hub.COMPANY.com:9094,kafka-b2-emea-prod-gbl-mdm-hub.COMPANY.com:9094,kafka-b3-emea-prod-gbl-mdm-hub.COMPANY.com:9094kafka-b1-emea-prod-gbl-mdm-hub-ext.COMPANY.com:9095,kafka-b2-emea-prod-gbl-mdm-hub-ext.COMPANY.com:9095,kafka-b3-emea-prod-gbl-mdm-hub-ext.COMPANY.com:9095 (external)kafka-b1-emea-k8s-prod-gbl-mdm-hub.COMPANY.com:9094MAP (external), PforceRx, MULEKafkaGBL DEV/QA/STAGEeuw1z1dl037.COMPANY.com:9094mdm-reltio-proxy.COMPANY.com:9094 (external)kafka-b1-emea-k8s-nprod-gbl-mdm-hub.COMPANY.com:9094MAP (external), China, KOL_ONEVIEW, PTRS, PTE, ENGAGE, MAPP,KafkaGBL PRODeuw1z1pl017.COMPANY.com:9094,euw1z1pl021.COMPANY.com:9094,euw1z1pl022.COMPANY.com:9094mdm-broker-p1.COMPANY.com:9094,mdm-broker-p2.COMPANY.com:9094,mdm-broker-p3.COMPANY.com:9094 (external)kafka-b1-emea-k8s-prod-gbl-mdm-hub.COMPANY.com:9094MAP (external), China, KOL_ONEVIEW, PTRS, ENGAGE, MAPP,KafkaEXTERNAL GBL DEV/QA/STAGE. Data Mart (Snowflake): There are no changes required if you use Snowflake to get MDMHUB data."
},
{
"title": "MDM HUB Log Management",
"pageID": "164470115",
"pageLink": "/display/GMDM/MDM+HUB+Log+Management",
"content": "MDM HUB has built in a log management solution that allows to trace data going through the system (incoming and outgoing events).It improves:TraceabilityAbility to trace input and output dataCompliance requirementsSecurityAny user activity is recordedThreat protection and discoveryMonitoringOutages & performance bottlenecks detectionAnalytics Metrics & trends in real-timeAnomalies detectionThe solution is based on EFK stack:ElasticSearch - provides storage and indexing and search capabilitiesFluentd - ships, transforms and loads logsKibana - provides UI for usersThe solutions is presented on the picture below: HUB microservices generetes log events and place them on KAFKA monitoring topics.Fluentd  processes events from topics and store them in ElasticSearch. Kibana presents data to users.    "
},
{
"title": "EFK Environments",
"pageID": "164470092",
"pageLink": "/display/GMDM/EFK+Environments",
"content": ""
},
{
"title": "Elastic Cloud on Kubernetes in MDM HUB",
"pageID": "284787486",
"pageLink": "/display/GMDM/Elastic+Cloud+on+Kubernetes+in+MDM+HUB",
"content": "Overview<graphic0>After migration on Kubernetes platform from on premise solutions we started to use Elastic Cloud on Kubernetes (ECK).https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-overview.html With ECK we can streamline critical operations, such as:Setting up hot-warm-cold architectures.Providing lifecycle policies for logs and transactions, snapshots of obsolete/older/less utility data.Creating dashboards visualising data of MDM HUB core processes.Logs, transactions and mongo collectionsWe splitted all the data entering the Elastic Stack cluster into different categories listed as follows:1. MDM HUB services logsFor forwarding MDM HUB services logs we use FluentBit where its used as a sidecar/agent container inside the mdmhub service pod.The sidecar/agents send data directly to a backend service on Kubernetes cluster.2. Backend logs and transactionsFor backend logs and transactions forwarding we use Fluentd as a forwarder and aggregator, lightweight pod instance deployed on edge.In case of Elasticsearch unavailability, secondary output is defined on S3 storage to not miss any data coming from services.3. 
MongoDB collectionsIn this scenario we decided to use Monstache, a sync daemon written in Go that continuously indexes MongoDB collections into Elasticsearch.We use it to mirror Reltio data gathered in MongoDB collections into Elasticsearch as a backup and a source for Kibana's dashboard visualisations.Data streamsMDM HUB services and backend logs and transactions are managed by the Data streams mechanism.A data stream lets us store append-only time series data (logs/transactions) across multiple indices while giving a single named resource for requests.https://www.elastic.co/guide/en/elasticsearch/reference/current/data-streams.htmlIndex lifecycle policies and snapshots managementIndex templates, index lifecycle policies and snapshots for index management are entirely covered by the Elasticsearch built-in mechanisms.Description of the index lifecycle divided into phases:Index rollover - logs and transactions are stored in hot tiersIndex rollover - logs and transactions are moved to the delete phaseSnapshot - logs and transactions deleted from Elasticsearch are snapshotted on an S3 bucketSnapshot - logs and transactions are deleted from the S3 bucket - the index is no longer availableAll snapshotted indices may be restored and recreated on Elasticsearch anytime.Maximum sizes and ages for the index rollovers and snapshots are included in the following tables:Non PROD environmentstypeindex rollover hot phaseindex rollover delete phasesnapshot phase MDM HUB logsage: 7dsize: 100gbage: 30dage: 180dBackend logsage: 7dsize: 100gbage: 30dage: 180dKafka transactionsage: 7dsize: 25gbage: 30dage: 180dPROD environmentstypeindex rollover hot phaseindex rollover delete phasesnapshot phase MDM HUB logsage: 7dsize: 100gbage: 90dage: 365dBackend logsage: 7dsize: 100gbage: 90dage: 365dKafka transactionsage: 7dsize: 25gbage: 180dage: 365dAdditionally, we execute a full snapshot policy on a daily basis. It is responsible for incrementally storing all the Elasticsearch indexes on S3 buckets as a backup. 
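As an illustration, the non-PROD policy for MDM HUB logs from the table above (hot rollover at age 7d or size 100gb, delete at age 30d) could be expressed as the ILM policy sketched below. This is an example of what such a policy might look like in the Elasticsearch ILM API, not a dump of the actual cluster configuration; the 180d snapshot retention is handled separately by the snapshot mechanisms described above.

```json
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_age": "7d", "max_size": "100gb" }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": { "delete": {} }
      }
    }
  }
}
```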
Snapshots locationsenvironmentS3 bucketpathEMEA NPRODpfe-atp-eu-w1-nprod-mdmhubemea/archive/elastic/fullEMEA PRODpfe-atp-eu-w1-prod-mdmhub-backupemaasp202207120811emea/archive/elastic/fullAMER NPRODgblmdmhubnprodamrasp100762amer/archive/elastic/fullAMER PRODpfe-atp-us-e1-prod-mdmhub-backupamrasp202207120808amer/archive/elastic/fullAPAC NPRODglobalmdmnprodaspasp202202171347apac/archive/elastic/fullAPAC PRODpfe-atp-ap-se1-prod-mdmhub-backuaspasp202207141502apac/archive/elastic/fullMongoDB collections data are stored on Elasticsearch permanently, they are not covered by the index lifecycle processes.Kibana dashboardsKibana Dashboard Overview"
},
{
"title": "Kibana Dashboards",
"pageID": "164470093",
"pageLink": "/display/GMDM/Kibana+Dashboards",
"content": ""
},
{
"title": "Tracing areas",
"pageID": "164470094",
"pageLink": "/display/GMDM/Tracing+areas",
"content": "Log data are generated in the following actions:API calls request timestampoperation namerequest payloadresponse statusMDM events timestampmdm nameevent typeevent payload"
},
{
"title": "MDM HUB Monitoring",
"pageID": "164470106",
"pageLink": "/display/GMDM/MDM+HUB+Monitoring",
"content": ""
},
{
"title": "AKHQ",
"pageID": "164470020",
"pageLink": "/display/GMDM/AKHQ",
"content": "AKHQ (https://github.com/tchiotludo/akhq) is a tool for browsing, changing and monitoring Kafka's instances.https://akhq-amer-nprod-gbl-mdm-hub.COMPANY.com/https://akhq-amer-prod-gbl-mdm-hub.COMPANY.com/https://akhq-emea-nprod-gbl-mdm-hub.COMPANY.com/https://akhq-emea-prod-gbl-mdm-hub.COMPANY.com/https://akhq-apac-nprod-gbl-mdm-hub.COMPANY.com/https://akhq-apac-prod-gbl-mdm-hub.COMPANY.com/"
},
{
"title": "Grafana & Kibana",
"pageID": "228933027",
"pageLink": "/pages/viewpage.action?pageId=228933027",
"content": "KIBANAUS PROD https://mdm-log-management-us-trade-prod.COMPANY.com:5601/app/kibanaUser: kibana_dashboard_viewUS NONPROD https://mdm-log-management-us-trade-nonprod.COMPANY.com:5601/app/kibanaUser: kibana_dashboard_view=====GBL PROD https://kibana-emea-prod-gbl-mdm-hub.COMPANY.comGBL NONPROD https://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com=====EMEA PROD https://kibana-emea-prod-gbl-mdm-hub.COMPANY.comEMEA NONPROD https://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com=====GBLUS PROD https://kibana-amer-prod-gbl-mdm-hub.COMPANY.comGBLUS NONPROD https://kibana-amer-nprod-gbl-mdm-hub.COMPANY.com=====AMER PROD https://kibana-amer-prod-gbl-mdm-hub.COMPANY.comAMER NONPROD https://kibana-amer-nprod-gbl-mdm-hub.COMPANY.com=====APAC PROD https://kibana-apac-prod-gbl-mdm-hub.COMPANY.comAPAC NONPROD https://kibana-apac-nprod-gbl-mdm-hub.COMPANY.comGRAFANAhttps://grafana-mdm-monitoring.COMPANY.comKeePass - download thisKibana-k8s.kdbxThe password to the KeePass is sent in a separate email to improve the security level of credentials sending.To get access, you only need to download the KeePass application 2.50 version (https://keepass.info/download.html) and use a password that is sent to log in to it.After you do it you will see a screen like:Then just click a title that you are interested in. And you get a window like:Here you have a user name, and a proper link and when you click 3 dots = red square you will get the password."
},
{
"title": "Grafana Dashboard Overview",
"pageID": "164470208",
"pageLink": "/display/GMDM/Grafana+Dashboard+Overview",
"content": "MDM HUB's Grafana is deployed on the MONITORING host and is available under the following URL:https://grafana-mdm-monitoring.COMPANY.comAll the dashboards are built using Prometheus's metrics."
},
{
"title": "Alerts Monitoring PROD&NON_PROD",
"pageID": "163917772",
"pageLink": "/pages/viewpage.action?pageId=163917772",
"content": "PROD: https://mdm-monitoring.COMPANY.com/grafana/d/5h4gLmemz/alerts-monitoring-prodNON PROD: https://mdm-monitoring.COMPANY.com/grafana/d/COVgYieiz/alerts-monitoring-non_prodThe Dashboard contains firing alerts and last Airflow DAG runs statuses for GBL (left side) and US FLEX (right side):a., e. number of alerts firingb., f. turns red when one or more DAG JOBS have failedc., g. alerts currently firingd., h. table containing all the DAGs and their run count for each of the statuses"
},
{
"title": "AWS SQS",
"pageID": "163917788",
"pageLink": "/display/GMDM/AWS+SQS",
"content": "Dashboard: https://mdm-monitoring.COMPANY.com/grafana/d/CI4RLieik/aws-sqsThe dashboard is describing the SQS queue used in Reltio→MDM HUB communication.The dashboard is divided into following sections:a. Approximate number of messages - how many messages are currently waiting in the queueb. Approximate number of messages delayed - how many messages are waiting to be added in the queuec. Approximate number of messages invisible - how many messages are not timed out nor deleted"
},
{
"title": "Docker Monitoring",
"pageID": "163917797",
"pageLink": "/display/GMDM/Docker+Monitoring",
"content": "Dashboard: https://mdm-monitoring.COMPANY.com/grafana/d/Z1VgYm6iz/docker-monitoringThis dashboard is describing the Docker containers running on hosts in each environment. Switch currently viewed environment/host using the variables at the top of the dashboard ("env", "host").The dashboard is divided into following sections:a. Running containers - how many containers are currently running on this hostb. Total Memory Usagec. Total CPU Usaged. CPU Usage - over time CPU use per containere. Memory Usage - over time Memory use per containerf. Network Rx - received bytes per container over timeg. Network Tx - transmited bytes per container over time"
},
{
"title": "Host Statistics",
"pageID": "163917801",
"pageLink": "/display/GMDM/Host+Statistics",
"content": "\n\n\n\nDashboard: https://mdm-monitoring.COMPANY.com/grafana/d/0RSgLi6mk/host-statisticsDashboard template source: https://grafana.com/grafana/dashboards/1860This dashboard is describing various statistics related to hosts' resource usage. It uses metrics from the node_exporter. You can change the currently viewed environment and host using variables at the top of the dashboard.\n\n\n\n\n\nBasic CPU / Mem / Disk Gaugea. CPU Busyb. Used RAM Memoryc. Used SWAP - hard disk memory used for swappingd. Used Root FSe. CPU System Load (1m avg)f. CPU System Load (5m avg)\n\n\n\n\n\nBasic CPU / Mem / Disk Infoa. CPU Coresb. Total RAMc. Total SWAPd. Total RootFSe. System Load (1m avg)f. Uptime - time since last restart\n\n\n\n\n\nBasic CPU / Mem Grapha. CPU Basic - CPU state %b. Memory Basic - memory (SWAP + RAM) use\n\n\n\n\n\nBasic Net / Disk Infoa. Network Traffic Basic - network traffic in bytes per interfaceb, Disk Space Used Basic - disk usage per mount\n\n\n\n\n\nCPU Memory Net Diska. CPU - percentage use per status/operationb. Memory Stack - use per status/operationc. Network Traffic - detailed network traffic in bytes per interface. Negative values correspond to transmited bytes, positive to received.d. Disk Space Used - disk usage per mounte. Disk IOps - disk operations per partition. Negative values correspond to write operations, positive - read operations.f. I/O Usage Read / Write - bytes read(positive)/written(negative) per partitiong. I/O Usage Times - time of I/O operations in seconds per partition\n\n\n\n\n\nEtc.As the dashboard template is a publicaly-available project, the panels/graphs are sufficiently described and do not require further explanation.\n\n\n"
},
{
"title": "HUB Batch Performance",
"pageID": "163917855",
"pageLink": "/display/GMDM/HUB+Batch+Performance",
"content": "\n\n\n\nDashboard: https://mdm-monitoring.COMPANY.com/grafana/d/gz0X6rkMk/hub-batch-performance\n\n\n\n\n\na. Batch loading rateb. Batch loading latencyc. Batch sending rated. Batch sending latencye. Batch processing rate - batch processing in ops/sf. Batch processing latency - batch processing time in secondsg. Batch loading max gauge - max loading time in secondsh. Batch sending max gauge - max sending time in secondsi. Batch processing max gauge - max processing in seconds\n\n\n"
},
{
"title": "HUB Overview Dashboard",
"pageID": "163917867",
"pageLink": "/display/GMDM/HUB+Overview+Dashboard",
"content": "\n\n\n\nDashboard: https://mdm-monitoring.COMPANY.com/grafana/d/OfVgLm6ik/hub-overviewThis dashboard contains information about Kafka topics/consumer groups in HUB - downstream from Reltio.\n\n\n\n\n\na. Lag by Consumer Group - lag on each INBOUND consumer groupb. Message consume per minute - messages consumed by each INBOUND consumer groupc. Message in per minute - inbound messages count by each INBOUND topicd. Lag by Consumer Group - lag on each OUTBOUND consumer groupe. Message consume per minute - messages consumed by each OUTBOUND consumer groupf. Message in per minute - inbound messages count by each OUTBOUND topicg. Lag by Consumer Group - lag on each INTERNAL BATCH consumer grouph. Message consume per minute - messages consumed by each INTERNAL BATCH consumer groupi. Message in per minute - inbound messages count by each INTERNAL BATCH topic\n\n\n"
},
{
"title": "HUB Performance",
"pageID": "163917830",
"pageLink": "/display/GMDM/HUB+Performance",
"content": "\n\n\n\nDashboard: https://mdm-monitoring.COMPANY.com/grafana/d/ZuVRLmemz/hub-performance\n\n\n\n\n\nAPI Performancea. Read Rate - API Read operations in 5/10/15min rateb. Read Latency - API Read operations latency in seconds for 50/75/99th percentile of requests. Consists of Reltio response time, processing time and total timec. Write Rate - API Write operations in 5/10/15min rated. Write Latency - API Write operations latency in seconds for 50/75/99th percentile of requests per each API operation\n\n\n\n\n\nPublishing Performancea. Event Preprocessing Total Rate - Publisher's preprocessed events 5/10/15min rate divided for entity/relation eventsb. Event Preprocessing Total Latency - preprocessing time in seconds for 50/75/99th percentile of events\n\n\n\n\n\nSubscribing Performancea. MDM Events Subscribing Rate - Subscriber's events rateb. MDM Events Subscribing Latency - Subscriber's event processing (passing downstream) rate\n\n\n"
},
{
"title": "JMX Overview",
"pageID": "163917876",
"pageLink": "/display/GMDM/JMX+Overview",
"content": "Dashboard: https://mdm-monitoring.COMPANY.com/grafana/d/MVSRYi6ik/jmx-overviewThis dashboard organizes and displays data extracted from each component by a JMX exporter - related to this component's resource usage. You can switch currently viewed environment/component/node using variables on the top of the dashboard.a. Memoryb. Total RAMc. Used SWAPd. Total SWAPe. CPU System Load(1m avg)f. CPU System Load(5m avg)g. CPU Coresh. CPU Usagei. Memory Heap/NonHeapj. Memory Pool Usedk. Threads usedl. Class loadingm. Open File Descriptorsn. GC time / 1 min. rate - Garbage Collector time rate/mino. GC count - Garbage Collector operations count"
},
{
"title": "Kafka Overview",
"pageID": "163917904",
"pageLink": "/display/GMDM/Kafka+Overview",
"content": "Dashboard: https://mdm-monitoring.COMPANY.com/grafana/d/YNIRYmeik/kafka-overviewThis dashboard describes Kafka's per node resource usage.a. CPU Usageb. JVM Memory Usedc. Time spent in GCd. Messages in Per Topice. Bytes in Per Topicf. Bytes Out Per Topic"
},
{
"title": "Kafka Overview - Total",
"pageID": "163917913",
"pageLink": "/display/GMDM/Kafka+Overview+-+Total",
"content": "Dashboard: https://mdm-monitoring.COMPANY.com/grafana/d/W6OysZ5Zz/kafka-overview-totalThis dashboard describes Kafka's total (all node summary) resource usage per environment.a. CPU Usageb. JVM Memory Usedc. Time spent in GCd. Messages ratee. Bytes in Ratef. Bytes Out Rate"
},
{
"title": "Kafka Topics Overview",
"pageID": "163917920",
"pageLink": "/display/GMDM/Kafka+Topics+Overview",
"content": "Dashboard: https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overviewThis dashboard describes Kafka topics and consumer groups in each environment.a. Topics purge ETA in hours - approximate time it should take for each consumer group to process all the events on their topicb. Lag by Consumer Groupc. Message in per minute - per topicd. Message consume per minute - per consumer groupe. Message in per second - per topic"
},
{
"title": "Kong Dashboard",
"pageID": "163917927",
"pageLink": "/display/GMDM/Kong+Dashboard",
"content": "Dashboard: https://mdm-monitoring.COMPANY.com/grafana/d/mY9p7dQmz/kongThis dashboard describes the Kong component statistics.a. Total requests per secondb. DB reachabilityc. Requests per serviced. Requests by HTTP status codee. Total Bandwidthf. Egress per service (All) - traffic exiting the MDM network in bytesg. Ingress per service (All) - traffic entering the MDM network in bytesh. Kong Proxy Latency across all services - divided on 90/95/99 percentilei. Kong Proxy Latency per service (All) - divided on 90/95/99 percentilej. Request Time across all services - divided on 90/95/99 percentilek. Request Time per service (All) - divided on 90/95/99 percentilel. Upstream Time across all services - divided on 90/95/99 percentilem. Upstream Time per service (All) - divided on 90/95/99 percentileo. Nginx connection statep. Total Connectionsq. Handled Connectionsr. Accepted Connections"
},
{
"title": "MongoDB",
"pageID": "163917945",
"pageLink": "/display/GMDM/MongoDB",
"content": "Dashboard: https://mdm-monitoring.COMPANY.com/grafana/d/sTSgLi6iz/mongodba. Query Operationsb. Document Operationsc. Document Query Executord. Member Healthe. Member Statef. Replica Query Operationsg. Uptimeh. Available Connectionsi. Open Connectionsj. Oplog Sizek. Memoryl. Network I/Om. Oplog Lagn. Disk I/O Utilizationo. Disk Reads Completedp. Disk Writes Completed"
},
{
"title": "Snowflake Tasks",
"pageID": "163917954",
"pageLink": "/display/GMDM/Snowflake+Tasks",
"content": "Dashboard: https://mdm-monitoring.COMPANY.com/grafana/d/358IxM_Mz/snowflake-tasksThis dashboard describes tasks running on each Snowflake instance.Please keep in mind that metrics supporting this dashboard are scraped rarely (every 8h on nprod, every 2h on prod), so keep the Time since last scrape gauge in mind when reviewing the results.a. Time since last scrape - time since the metrics were last scraped - it marks dashboard freshnessb. Last Task Runs - table contains:task's name,date&time of last recorded run,visualisation of how long ago was the last run,state of last run,duration of last run (processing time)c. Processing time - visualizes how the processing time of each task was changing over time"
},
{
"title": "Kibana Dashboard Overview",
"pageID": "164469839",
"pageLink": "/display/GMDM/Kibana+Dashboard+Overview",
"content": ""
},
{
"title": "API Calls Dashboard",
"pageID": "164469837",
"pageLink": "/display/GMDM/API+Calls+Dashboard",
"content": "The dashboard contains summary of MDM Gateway API calls in the chosen time range.Use it to:find a certain API call by entity/timestamp/username,check which host this request was sent to,check request processing time etc.The dashboard is divided into the following sections:a. Total requests count - how many requests have been logged in this time range (or passed the filter if that's the case)b. Controls - allows user to filter requests based on username and operationc. Requests by operation - how many requests have been sent per each operationd. Average response time - how long the response time was on average per each actione. Request per client - how many requests have been sent per each clientf. Response status - how many requests have resulted with each statusg. Top 10 processing times - summary of 10 requests that have been processed the longest in this time range. Contains transaction ID, related entity URI, operation type and duration in ms.681pxh. Logs - summary of all the logged requests"
},
{
"title": "Batch Loads Dashboard",
"pageID": "164469855",
"pageLink": "/display/GMDM/Batch+Loads+Dashboard",
"content": "The dashboard contains information about files processed by the Batch Channel component.Use this dashboard to:check whether the files were delivered on schedule,check processing time,verify that the files have been processed correctly.The dashboard is divided into following sections:a. File by type - summary of how many files of each type were delivered in this time range.b. File load status count - visualisation of how many entities were extracted from each file type and what was the result of their processing.c. File load count - visualisation of loaded files in this time range. Use it to verify that the files have been delivered on schedule.d. File load summary - summary of the processing of each loaded file. e. Response status load summary - summary of processing result for each file type."
},
{
"title": "HL DCR Dashboard",
"pageID": "164469753",
"pageLink": "/display/GMDM/HL+DCR+Dashboard",
"content": "This dashboard contains information related to the HL DCR flow (DCR Service).Use it to:track issues related to the HL DCR flow.The dashboard is divided into following sections:a. DCR Status - summary of how many DCRs have each of the statusesb. Reltio DCR Stats - summary of how many DCRs that have been processed and sent to Reltio have each of the statusesc. DCRRequestProcessing report - list of DCR reports generated in this time ranged. DCR Current state - list of DCRs and their current statuses"
},
{
"title": "HUB Events Dashboard",
"pageID": "164469849",
"pageLink": "/display/GMDM/HUB+Events+Dashboard",
"content": "Dashboard contains information about the Publisher component - events sent to clients or internal components (ex. Callback Service).Use it to:track issues related to Publisher's event processing (filtering/publishing),find information about Publisher's event processing time,find potential issues with events not being published from one topic or being constantly skipped etc.The dashboard is divided into following sections:a. Count - how many events have been processed by the Publisher in this time rangeb. Event count - visualisation of how many events have been processed over timec. Simple events in time - visualisation of how many simple events have been processed (published) over time per each outbound topicd. Skipped events in time - visualisation of how many events have been skipped (filtered) for each reason over timee. Full events in time - visualisation of how many full events have been published over time per each topicf. Processing time - visualisation of how long the processing of entities/relations events tookg. Events by country - summary of how many events were related to each countryh. Event types - summary of how many events were of each typei. Full events by Topics - visualisation of how many full events of each type were published on each of the topicsj. Simple events by Topics - visualisation of how many simple events of each type were published on each of the topicsk. Publisher Logs - list containing all the useful information extracted from the Publisher logs for each event. Use it to track issues related to Publisher's event processing."
},
{
"title": "HUB Store Dashboard",
"pageID": "164469853",
"pageLink": "/display/GMDM/HUB+Store+Dashboard",
"content": "Summary of all entities in the MDM in this environment. Contains summary information about entities count, countries and sources. The dashboard is divided into following sections:a. Entities count - how many entities are there currently in MDMb. Entities modification count - how many entity modifications (create/update/delete) were there over timec. Status - summary of how many entities have each of the statusesd. Type - summary of how many entities are HCO (Health Care Organization) or HCP (Health Care Professional)e. MDM - summary of how many MDM entities are in Reltio/Nucleusf. Entities country - visualisation of country to entity countg. Entities source - visualisation of source to entity counth. Entities by country source type - visualisation of how many entities are there from each country with each sourcei. World Map - visualisation of how many entities are there from each countryj. Source/Country Heat Map - another visualisation of Country-Source distribution"
},
{
"title": "MDM Events Dashboard",
"pageID": "164469851",
"pageLink": "/display/GMDM/MDM+Events+Dashboard",
"content": "This dashboard contains information extracted from the Subscriber component.Use it to:confirm that a certain event was received from Reltio/Nucleus,check the consume time.The dashboard is divided into following sections:a. Total events count - how many events have been received and published to an internal topic in this time rangeb. Event types - visualisation of how many events processed were of each typec. Event count - visualisation of how many events were processed over timed. Event destinations - visualisation of how many events have been passed to each of internal topics over timee. Average consume time - visualisation of how long it took to process/pass received events over timef. Subscriber Logs - list containing all the useful information extracted from the Subscriber logs. Use it to track potential issues"
},
{
"title": "Profile Updates Dashboard",
"pageID": "164469751",
"pageLink": "/display/GMDM/Profile+Updates+Dashboard",
"content": "This dashboard contains information about HCO/HCP profile updates via MDM Gateway.Use it to:check how many updates have been processed,check processing results (statuses),track an issue related to the Gateway components.Note, that the Gateway is not only used by the external vendors, but also by HUB's components (Callback Service).The dashboard is divided into following sections:a. Count - how many profile updates have been logged in this time periodb. Updates by status - how many updates have each of the statusesc. Updates count - visualisation of how many updates were received by the Gateway over timed. Updates by country source status - visualisation of how many updates were there for each country, from each source and with each statuse. Updates by source - summary of how many profile updates were there from each sourcef. Updates by country source status - another visualisation of how many updates were there for each country, source, statusg. World Map - visualisation of how many updates were there on profiles from each of the countriesh. Gateway Logs - list containing all the useful information extracted from the Gateway components' logs. Use it to track issues related to the MDM Gateway"
},
{
"title": "Reconciliation metrics Dashboard",
"pageID": "310964632",
"pageLink": "/display/GMDM/Reconciliation+metrics+Dashboard",
"content": "The Reconciliation Metrics Dashboard shows reasons why the MDM object (entity or relation) was reconciled.Use it to:Check how many records were reconciled,Find the reasons for reconciliation.Currently, the dashboard can show the following reasons:reconciliation.lookupcode.error - new lookup error was added. Caused by changes in RDM reconciliation.lookupcode.changed - lookup code was changed. Caused by changes in RDM reconciliation.updatedtime.changed - entity updateTime changed reconciliation.description.changed - Any description attribute changed. Checks attribute path for .*[Dd]escription.* reconciliation.stateprovince.changed - Addresses, Stateprovince value changed  reconciliation.workplace.changed - Workplace changed  reconciliation.rank.changed - /attributes/Rank changed reconciliation.relation.objectlabel.changed - /startObject/label or /endObject/label changed reconciliation.object.missed - Object was removed reconciliation.object.added - Object was added  reconciliation.specialities.changed - Specialities changed(added/removed/replaced) reconciliation.specialities.label.changed - Specialities label changed(added/removed/replaced) reconciliation.mainhco.changed - /attributes/MainHCO changed(added/removed/replaced) reconciliation.address.changed - Any field under Address changed(added/removed/replaced) reconciliation.refentity.changed - Any reference entity changed('^/attributes/.*refEntity.+$' - added/removed/replaced) reconciliation.refrelation.changed - Any reference relationchanged('^/attributes/.*refRelation.+$' - added/removed/replaced) reconciliation.crosswwalk.attributeslist.change - Crosswalk attributes changed(added/removed/replaced) reconciliation.directionallabel.changed - directionalLabel changed(added/removed/replaced) reconciliation.value.changed - Any attribute changed(added/removed/replaced) reconciliation.other.reason - Non clasified reason - other cases The dashboard consists of a few diagrams:{ENV NAME} Reconciliation reasons 
- shows the most frequent reasons for reconciliation,Number by country - general number of reconciliation reasons divided by countries,Number by types - shows the general number of reconciliation reasons grouped by MDM object type,Reason list - reconciliation reasons with the number of their occurrences,{ENV NAME} Reconciliation metrics - detail view that shows data generated by the Reconciliation Metrics flow. The data has detailed information about what exactly changed on a specific MDM object."
},
{
"title": "Prometheus Alerts",
"pageID": "164470107",
"pageLink": "/display/GMDM/Prometheus+Alerts",
"content": "DashboardsThere are 2 dashboards available for problems overview: KarmaGrafana - Alerts Monitoring DashboardAlertsENVNameAlertCause (Expression)TimeSeverityAction to be takenALLMDMhigh_load> 30 load130mwarningDetect why load is increasing. Decrease number of threads on components or turn off some of them.ALLMDMhigh_load> 30 load12hcriticalDetect why load is increasing. Decrease number of threads on components or turn off some of them.ALLMDMmemory_usage>  90% used1hcriticalDetect the component which is causing high memory usage and restart it.ALLMDMdisk_usage< 10% free2mhighRemove or archive old component logs.ALLMDMdisk_usage<  5% free2mcriticalRemove or archive old component logs.ALLMDMkong_processor_usage> 120% CPU used by container10mhighCheck the Kong containerALLMDMcpu_usage> 90% CPU used1hcriticalDetect the cause of high CPU use and take appropriate measuresALLMDMsnowflake_task_not_successful_nprodLast Snowflake task run has state other than "SUCCEEDED"1mhighInvestigate whether the task failed or was skipped, and what caused it.Metric value returned by the alert corresponds to the task state:0 - FAILED1 - SUCCEEDED2 - SCHEDULED3 - SKIPPEDALLMDMsnowflake_task_not_successful_prodLast Snowflake task run has state other than "SUCCEEDED"1mhighInvestigate whether the task failed or was skipped, and what caused it.Metric value returned by the alert corresponds to the task state:0 - FAILED1 - SUCCEEDED2 - SCHEDULED3 - SKIPPEDALLMDMsnowflake_task_not_started_24hSnowflake task has not started in the last 24h (+ 8h scrape time)1mhighInvestigate why the task was not scheduled/did not start.ALLMDMreltio_response_timeReltio response time to entities/get requests is >= 3 sec for 99th percentile20mhighNotify the Reltio Team.NON PRODMDMservice_downup{env!~".*_prod"} == 020mwarningDetect the not working component and start it.NON PRODMDMkafka_streams_client_statekafka streams client state != 21mhighCheck and restart the Callback Service.NON 
PRODKongkong_database_downKong DB unreachable20mwarningCheck the Kong DB component.NON PRODKongkong_http_500_status_rateHTTP 500 > 10%5mwarningCheck Gateway components' logs.NON PRODKongkong_http_502_status_rateHTTP 502 > 10%5mwarningCheck Kong's port availability.NON PRODKongkong_http_503_status_rateHTTP 503 > 10%5mwarningCheck the Kong component.NON PRODKongkong_http_504_status_rateHTTP 504 > 10%5mwarningCheck Reltio response rates. Check Gateway components for issues.NON PRODKongkong_http_401_status_rateHTTP 401 > 30%20mwarningCheck Kong logs. Notify the authorities in case of suspected break-in attempts.GBL NON PRODKafkainternal_reltio_events_lag_dev> 500 00030minfoCheck why lag is increasing. Restart the Event Publisher.GBL NON PRODKafkainternal_reltio_relations_events_lag_dev> 500 00030minfoCheck why lag is increasing. Restart the Event Publisher.GBL NON PRODKafkainternal_reltio_events_lag_stage> 500 00030minfoCheck why lag is increasing. Restart the Event Publisher.GBL NON PRODKafkainternal_reltio_relations_events_lag_stage> 500 00030minfoCheck why lag is increasing. Restart the Event Publisher.GBL NON PRODKafkainternal_reltio_events_lag_qa> 500 00030minfoCheck why lag is increasing. Restart the Event Publisher.GBL NON PRODKafkainternal_reltio_relations_events_lag_qa> 500 00030minfoCheck why lag is increasing. Restart the Event Publisher.GBL NON PRODKafkakafka_jvm_heap_memory_increasing> 1000MB memory use predicted in 5 hours20mhighCheck if Kafka is rebalancing. Check the Event Publisher.GBL NON PRODKafkafluentd_dev_kafka_consumer_group_members0 EFK consumergroup members30mhighCheck Fluentd logs. Restart Fluentd.GBLUS NON PRODKafkainternal_reltio_events_lag_gblus_dev> 500 00040minfoCheck why lag is increasing. Restart the Event Publisher.GBLUS NON PRODKafkainternal_reltio_events_lag_gblus_qa> 500 00040minfoCheck why lag is increasing. 
Restart the Event Publisher.GBLUS NON PRODKafkainternal_reltio_events_lag_gblus_stage> 500 00040minfoCheck why lag is increasing. Restart the Event Publisher.GBLUS NON PRODKafkakafka_jvm_heap_memory_increasing> 3100MB memory use predicted in 5 hours20mhighCheck if Kafka is rebalancing. Check the Event Publisher.GBLUS NON PRODKafkafluentd_gblus_dev_kafka_consumer_group_members0 EFK consumergroup members30mhighCheck Fluentd logs. Restart Fluentd.GBL PRODMDMservice_downcount(up{env=~"gbl_prod"} == 0) by (env,component) == 15mhighDetect the not working component and start it.GBL PRODMDMservice_downcount(up{env=~"gbl_prod"} == 0) by (env,component) > 15mcriticalDetect the not working component and start it.GBL PRODMDMservice_down_kafka_connect0 Kafka Connect Exporters up in the environment5mcriticalCheck and start the Kafka Connect Exporter.GBL PRODMDMservice_downOne or more Kafka Connect instances down5mcriticalCheck and start the Kafka Connect.GBL PRODMDMdcr_stuck_on_prepared_statusDCR has been PREPARED for 1h1hhighDCR has not been processed downstream. Notify IQVIA.GBL PRODMDMdcr_processing_failureDCR processing failed in the last 24 hoursCheck DCR Service, Wrapper logs.GBL PRODCron Jobsmongo_automated_script_not_startedMongo Cron Job has not started1hhighCheck the MongoDB.GBL PRODKongkong_database_downKong DB unreachable20mwarningCheck the Kong DB component.GBL PRODKongkong_http_500_status_rateHTTP 500 > 10%5mwarningCheck Gateway components' logs.GBL PRODKongkong_http_502_status_rateHTTP 502 > 10%5mwarningCheck Kong's port availability.GBL PRODKongkong_http_503_status_rateHTTP 503 > 10%5mwarningCheck the Kong component.GBL PRODKongkong_http_504_status_rateHTTP 504 > 10%5mwarningCheck Reltio response rates. Check Gateway components for issues.GBL PRODKongkong_http_401_status_rateHTTP 401 > 30%10mwarningCheck Kong logs. 
Notify the authorities in case of suspected break-in attempts.GBL PRODKafkainternal_reltio_events_lag_prod> 1 000 00030minfoCheck why lag is increasing. Restart the Event Publisher.GBL PRODKafkainternal_reltio_relations_events_lag_prod> 1 000 00030minfoCheck why lag is increasing. Restart the Event Publisher.GBL PRODKafkaprod-out-full-snowflake-all_no_consumersprod-out-full-snowflake-all has lag and has not been consumed for 2 hours1mhighCheck and restart the Kafka Connect Snowflake component.GBL PRODKafkainternal_gw_gcp_events_deg_lag_prod> 50 00030minfoCheck the Map Channel component.GBL PRODKafkainternal_gw_gcp_events_raw_lag_prod> 50 00030minfoCheck the Map Channel component.GBL PRODKafkainternal_gw_grv_events_deg_lag_prod> 50 00030minfoCheck the Map Channel component.GBL PRODKafkainternal_gw_grv_events_deg_lag_prod> 50 00030minfoCheck the Map Channel component.GBL PRODKafkaforwarder_mapp_prod_kafka_consumer_group_membersforwarder_mapp_prod consumer group has 0 members30mcriticalCheck the MAPP Events Forwarder.GBL PRODKafkaigate_prod_kafka_consumer_group_membersigate_prod consumer group members have decreased (still > 20)15minfoCheck the Gateway components.GBL PRODKafkaigate_prod_kafka_consumer_group_membersigate_prod consumer group members have decreased (still > 10)15mhighCheck the Gateway components.GBL PRODKafkaigate_prod_kafka_consumer_group_membersigate_prod consumer group has 0 members15mcriticalCheck the Gateway components.GBL PRODKafkahub_prod_kafka_consumer_group_membershub_prod consumer group members have decreased (still > 100)15minfoCheck the Hub components.GBL PRODKafkahub_prod_kafka_consumer_group_membershub_prod consumer group members have decreased (still > 50)15minfoCheck the Hub components.GBL PRODKafkahub_prod_kafka_consumer_group_membershub_prod consumer group has 0 members15minfoCheck the Hub components.GBL PRODKafkakafka_jvm_heap_memory_increasing> 2100MB memory use on node 1 predicted in 5 hours20mhighCheck if Kafka is rebalancing. 
Check the Event Publisher.GBL PRODKafkakafka_jvm_heap_memory_increasing> 2000MB memory use on nodes 2&3 predicted in 5 hours20mhighCheck if Kafka is rebalancing. Check the Event Publisher.GBL PRODKafkafluentd_prod_kafka_consumer_group_membersFluentd consumergroup has 0 members30mhighCheck and restart Fluentd.US PRODMDMservice_downBatch Channel is not running5mcriticalStart the Batch ChannelUS PRODMDMservice_down1 component is not running5mhighDetect the not working component and start it.US PRODMDMservice_down>1 component is not running5mcriticalDetect the not working components and start them.US PRODCron Jobsarchiver_not_startedArchiver has not started in 24 hours1hhighCheck the Archiver.US PRODKafkainternal_reltio_events_lag_us_prod> 500 0005mhighCheck why lag is increasing. Restart the Event Publisher.US PRODKafkainternal_reltio_events_lag_us_prod> 1 000 0005mcriticalCheck why lag is increasing. Restart the Event Publisher.US PRODKafkahin_kafka_consumer_lag_us_prod> 100015mcriticalCheck why lag is increasing. Restart the Batch Channel.US PRODKafkaflex_kafka_consumer_lag_us_prod> 100015mcriticalCheck why lag is increasing. Restart the Batch Channel.US PRODKafkasap_kafka_consumer_lag_us_prod> 100015mcriticalCheck why lag is increasing. Restart the Batch Channel.US PRODKafkadea_kafka_consumer_lag_us_prod> 100015mcriticalCheck why lag is increasing. Restart the Batch Channel.US PRODKafkaigate_prod_hco_create_kafka_consumer_group_members>= 30 < 40 and lag > 100015minfoCheck why the number of consumers is decreasing. Restart the Batch Channel.US PRODKafkaigate_prod_hco_create_kafka_consumer_group_members>= 10 < 30 and lag > 100015mhighCheck why the number of consumers is decreasing. Restart the Batch Channel.US PRODKafkaigate_prod_hco_create_kafka_consumer_group_members== 0 and lag > 100015mcriticalCheck why the number of consumers is decreasing. 
Restart the Batch Channel.US PRODKafkahub_prod_kafka_consumer_group_members>= 30 < 45 and lag > 100015minfoCheck why the number of consumers is decreasing. Restart the Event Publisher.US PRODKafkahub_prod_kafka_consumer_group_members>= 10 < 30 and lag > 100015mhighCheck why the number of consumers is decreasing. Restart the Event Publisher.US PRODKafkahub_prod_kafka_consumer_group_members== 0 and lag > 100015mcriticalCheck why the number of consumers is decreasing. Restart the Event Publisher.US PRODKafkafluentd_prod_kafka_consumer_group_membersEFK consumer group has 0 members30mhighCheck and restart Fluentd.US PRODKafkaflex_prod_kafka_consumer_group_membersFLEX Kafka Connector has 0 consumers10mcriticalNotify the FLEX TeamGBLUS PRODMDMservice_downcount(up{env=~"gblus_prod"} == 0) by (env,component) == 15mhighDetect the not working component and start it.GBLUS PRODMDMservice_downcount(up{env=~"gblus_prod"} == 0) by (env,component) > 15mcriticalDetect the not working component and start it.GBLUS PRODKongkong_database_downKong DB unreachable20mwarningCheck the Kong DB component.GBLUS PRODKongkong_http_500_status_rateHTTP 500 > 10%5mwarningCheck Gateway components' logs.GBLUS PRODKongkong_http_502_status_rateHTTP 502 > 10%5mwarningCheck Kong's port availability.GBLUS PRODKongkong_http_503_status_rateHTTP 503 > 10%5mwarningCheck the Kong component.GBLUS PRODKongkong_http_504_status_rateHTTP 504 > 10%5mwarningCheck Reltio response rates. Check Gateway components for issues.GBLUS PRODKongkong_http_401_status_rateHTTP 401 > 30%10mwarningCheck Kong logs. Notify the authorities in case of suspected break-in attempts.GBLUS PRODKafkainternal_reltio_events_lag_prod> 1 000 00030minfoCheck why lag is increasing. 
Restart the Event Publisher.GBLUS PRODKafkaigate_async_prod_kafka_consumer_group_membersigate_async_prod consumer group members have decreased (still > 20)15minfoCheck the Gateway components.GBLUS PRODKafkaigate_async_prod_kafka_consumer_group_membersigate_async_prod consumer group members have decreased (still > 10)15mhighCheck the Gateway components.GBLUS PRODKafkaigate_async_prod_kafka_consumer_group_membersigate_async_prod consumer group has 0 members15mcriticalCheck the Gateway components.GBLUS PRODKafkahub_prod_kafka_consumer_group_membershub_prod consumer group members have decreased (still > 20)15minfoCheck the Hub components.GBLUS PRODKafkahub_prod_kafka_consumer_group_membershub_prod consumer group members have decreased (still > 10)15mhighCheck the Hub components.GBLUS PRODKafkahub_prod_kafka_consumer_group_membershub_prod consumer group has 0 members15mcriticalCheck the Hub components.GBLUS PRODKafkabatch_service_prod_kafka_consumer_group_membersbatch_service_prod consumer group has 0 members15mcriticalCheck the Batch Service component.GBLUS PRODKafkabatch_service_prod_ack_kafka_consumer_group_membersbatch_service_prod_ack consumer group has 0 members15mcriticalCheck the Batch Service component.GBLUS PRODKafkafluentd_gblus_prod_kafka_consumer_group_membersEFK consumer group has 0 members30mhighCheck Fluentd. Restart if necessary.GBLUS PRODKafkakafka_jvm_heap_memory_increasing> 3100MB memory use predicted in 5 hours20mhighCheck if Kafka is rebalancing. Check the Event Publisher."
},
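{
"title": "Alert rule sketch (illustrative)",
"content": "The service_down expressions in the table above are standard Prometheus alerting expressions, so each table row maps onto one alerting rule in the usual Prometheus rule-file layout. A minimal sketch for the two GBL PROD service_down rows follows; the group name and annotation key are assumptions, not taken from the live configuration:

```yaml
groups:
  - name: mdm-service-availability   # assumed group name, for illustration only
    rules:
      - alert: service_down
        # exactly one component reporting down in gbl_prod -> severity high
        expr: count(up{env=~\"gbl_prod\"} == 0) by (env, component) == 1
        for: 5m
        labels:
          severity: high
        annotations:
          action: \"Detect the not working component and start it.\"
      - alert: service_down
        # more than one component reporting down in gbl_prod -> severity critical
        expr: count(up{env=~\"gbl_prod\"} == 0) by (env, component) > 1
        for: 5m
        labels:
          severity: critical
        annotations:
          action: \"Detect the not working component and start it.\"
```

The Time column of the table corresponds to the rule's for: duration, and the Severity column to a label that the alert router can match on."
},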
{
"title": "Security",
"pageID": "164470097",
"pageLink": "/display/GMDM/Security",
"content": "\nThe following aspects supporting security are implemented in the solution:\n\n\tAll server nodes are inside the COMPANY VPN.\n\tExternal endpoints (Kafka, KONG API) are exposed to cloud services (MAP, Appian) through the AWS ELB.\n\tEach endpoint has secured transport established using TLS 1.2 see the Transport section.\n\tOnly authenticated clients can access MDM services.\n\tAccess to resources is controlled by a built-in authorization process.\n\tEvery API call is logged in the access log, which uses the standard Nginx access log format.\n\n"
},
{
"title": "Authentication",
"pageID": "164470075",
"pageLink": "/display/GMDM/Authentication",
"content": "\nAPI Authentication\nAPI authentication is provided by KONG. The following methods are supported:\n\n\tOAuth2 internal\n\tOAuth2 external Ping Federate (recommended)\n\tAPI key\n\n\n\nThe OAuth2 method is recommended, especially for cloud services. The gateway uses the Client Credentials grant type of OAuth2. The method is supported by the KONG OAuth2 plugin. Client secrets are managed by Kong and stored in the Cassandra configuration database.\nAPI key authentication is deprecated and should be avoided for new services. Keys are unique, randomly generated, 32 characters long, and managed by the Kong Gateway please see the Kong Gateway documentation for details."
},
{
"title": "Authorization",
"pageID": "164470078",
"pageLink": "/display/GMDM/Authorization",
"content": "\nRest APIs\nAccess to exposed services is controlled with the following algorithm:\n\n\tThe REST channel component reads the user authorization configuration based on the X-Consumer-Username header passed by KONG.\n\tThe authorization configuration contains:\n\t\n\t\tList of roles the user can access. Roles express the operations/logic the user can execute.\n\t\tList of countries the user can read or write.\n\t\tList of source systems (related to crosswalk type) that data can come from.\n\t\n\t\n\tOperation level authorization the system checks if the user can execute an operation.\n\tData level authorization the system checks if the user can read or modify entities:\n\t\n\t\tDuring a read operation by crosswalk, the system checks if the country attribute value is on the allowed country list; otherwise it throws an access forbidden error.\n\t\tDuring a search operation, the filter is modified (restrictions on the country attribute are added) to exclude countries the user has no access to.\n\t\tDuring a write operation, the system validates if the country attribute and crosswalk type are authorized.\n\n\nTable 12. Role definitions\n \n\n\nRole name\nDescription\n\n\nPOST_HCP\nAllows user to create a new HCP entity\n\n\nPATCH_HCP\nAllows user to update HCP entity\n\n\nPOST_HCO\nAllows user to create a new HCO entity\n\n\nPATCH_HCO\nAllows user to update HCO entity\n\n\nGET_ENTITY\nAllows user to get data of single Entity, specified by ID\n\n\nSEARCH_ENTITY\nAllows user to search for Entities by search criteria\n\n\nRESPONSE_DCR\nAllows user to send DCR response to Gateway\n\n\nDELETE_CROSSWALK\nAllows user to delete crosswalk, effectively removing one datasource from Entity\n\n\nGET_LOV\nAllows user to get dictionary data (LookupValues)\n\n\n\nSample authorization configuration for user:\n \nKafka\nKAFKA resources are protected by the ACL mechanism; clients are granted permission to read only from topics dedicated to them. 
The complexity of Kafka ACLs is hidden behind Ansible: permissions are defined in a YAML file, in the following format:\n \nThe type and description of each parameter is specified in the table below.\n\n\nTable 13. Topic configuration parameters\n \n \n\n\nParameter\nType\nDescription \n\n\nname\nString\nTopic name\n\n\npartitions\nInteger\nNumber of partitions to create\n\n\nreplicas\nInteger\nReplication factor for partitions\n\n\nproducers\nList of String\nList of usernames that are allowed to publish messages to this topic\n\n\nconsumers\nMap of String, String\nConsumers that are allowed to consume from this topic. Map entries are in format "username":"consumer_group_id"\n\n\n\n\t\n\t\n"
},
{
"title": "KONG external OAuth2 plugin",
"pageID": "164470072",
"pageLink": "/display/GMDM/KONG+external+OAuth2+plugin",
"content": "\nTo integrate with the Ping Federate token validation process, an external KONG plugin was implemented. Source code and instructions for installation and configuration of the local environment were published on GitHub. \nCheck the https://github.com/COMPANY/mdm-gateway/tree/kong/mdm-external-oauth-plugin readme file for more information.\nThe role of the plugin: \nValidate access tokens sent by developers using a third-party OAuth 2.0 Authorization Server (RFC 7662). The flow of the plugin, the request, and the response from Ping Federate have to be compatible with the RFC 7662 specification. To get more information about this specification check https://tools.ietf.org/html/rfc7662 . The plugin assumes that the Consumer already has an access token that will be validated against a third-party OAuth 2.0 server Ping Federate. \nFlow of the plugin:\n\n\tThe client invokes the Gateway API providing a token generated from the PING API\n\tThe KONG plugin introspects this token\n\t\n\t\tif the token is active, the plugin will fill the X-Consumer-Username header\n\t\tif the token is not active, access to the specific URI will be forbidden\n\t\n\t\n\n\n\n \nExample External Plugin configuration:\n \n\nTo define a mdm-external-oauth plugin the following parameters have to be defined:\n\n\tintrospection_url the URL address of the Ping Federate API with access to introspect OAuth2 tokens\n\tauthorization_value username and ●●●●●●●●●●●●●●●● to "Basic <value>" format which is authorized to invoke the introspect API.\n\thide_credentials if true, the token provided in the request will be removed from the request after validation for additional security.\n\tusers_map this map contains a comma-separated list of values. The first value is the user name defined in Ping Federate; the second value, separated by a colon, is the user name defined in the mdm-manager application. This map is used to correctly map and validate tokens received in requests. Additionally, when PingFederate introspects a token, it returns the username. 
This username is mapped to an existing user in mdm-manager, so there is no need to define additional users in mdm-manager; it is enough to fill the users_map configuration with appropriate values.\n\n\n\nKAFKA authentication\nKafka access is protected using the SASL framework. Clients are required to specify user and ●●●●●●●●●●● the configuration. Credentials are sent over TLS transport."
},
{
"title": "Transport",
"pageID": "164470076",
"pageLink": "/display/GMDM/Transport",
"content": "\nCommunication between the KONG API Gateway and external systems is secured by setting up an encrypted connection with the following specifications:\n\n\tCiphersuites: ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCMSHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256\n\tVersions: TLSv1.2\n\tTLS curves: prime256v1, secp384r1, secp521r1\n\tCertificate type: ECDSA\n\tCertificate curve: prime256v1, secp384r1, secp521r1\n\tCertificate signature: sha256WithRSAEncryption, ecdsa-with-SHA256, ecdsa-with-SHA384, ecdsa-with-SHA512\n\tRSA key size: 2048 (if not ecdsa)\n\tDH Parameter size: None (disabled entirely)\n\tECDH Parameter size: 256\n\tHSTS: max-age=15768000\n\tCertificate switching: None\n\n\n\n"
},
{
"title": "User management",
"pageID": "164470079",
"pageLink": "/display/GMDM/User+management",
"content": "\nUser accounts are managed by the respective components of the Gateway and Hub. \nAPI Users\nThese are managed by the Kong Gateway and stored in the Cassandra database. There are two ways of adding a new user to the Kong configuration:\n\n\tUsing the configuration repository and Ansible\n\n\n\nThe Ansible tool, which is used to deploy MDM Integration Services, has a plugin that supports Kong user management. User configuration is kept in YAML configuration files (passwords being encrypted using built-in AES-256 encryption). Adding a new user requires adding the following section to the appropriate configuration file:\n \n\n\tDirectly, using the Kong REST API\n\n\n\nThis method requires access to the COMPANY VPN and to the machine that hosts the MDM Integration Services, since the REST endpoints are only bound to "localhost", and not exposed to the outside world. The URL of the endpoint is:\n It can be accessed via the cURL command-line tool. To list all the users that are currently defined use the following command:\n \nTo create a new user:\n To set an API Key for the user:\n A new API key will be automatically generated by Kong and returned in the response.\nTo create OAuth2 credentials use the following call instead:\n client_id and client_secret are login credentials; redirect_uri should point to the HUB API endpoint. Please see the Kong Gateway documentation for details.\n\nKAFKA users\nKafka users are managed by brokers. The authentication method used is Java Authentication and Authorization Service (JAAS) with the PlainLogin module. User configuration is stored in the kafka_server_jaas.conf file, which is present on each broker. The file has the following structure:\n \nProperties "username" and "password" define the credentials used to secure inter-broker communication. Properties in the format "user_<username>" are the actual user definitions. So, adding a new user named "bob" would require adding the following property to the kafka_server_jaas.conf file:\n\n \n\nCAUTION! 
Since the JAAS configuration file is only read on Kafka broker startup, adding a new user requires a restart of all brokers. In a multi-broker environment this can be achieved by restarting one broker at a time, which should be transparent for end users, given Kafka's fault-tolerance capabilities. This limitation might be overcome in future versions by using an external user store or a custom login module instead of PlainLoginModule. The process of adding this entry and distributing the kafka_server_jaas.conf file is automated with Ansible: usernames and ●●●●●●●●●●●● kept in a YAML configuration file, encrypted using Ansible Vault (with AES encryption). \nMongoDB users\nMongoDB is used only internally, by Publishing Hub modules, and is not exposed to external users, therefore there is no need to create accounts for them. For operational purposes some administration/technical accounts may be created using standard Mongo command-line tools, as described in the MongoDB documentation."
},
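{
"title": "kafka_server_jaas.conf sketch (illustrative)",
"content": "The kafka_server_jaas.conf layout described above follows the standard Kafka PlainLoginModule convention; a minimal sketch with the example user \"bob\" added is shown below. All secrets are placeholders, not values from any environment:

```
KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username=\"admin\"
    password=\"<inter-broker-secret>\"
    user_admin=\"<inter-broker-secret>\"
    user_bob=\"<bob-secret>\";
};
```

The username/password pair at the top secures inter-broker traffic, while each user_<name> property defines one client account, matching the description above."
},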
{
"title": "SOP HUB",
"pageID": "164470101",
"pageLink": "/display/GMDM/SOP+HUB",
"content": ""
},
{
"title": "Hub Configuration",
"pageID": "302705379",
"pageLink": "/display/GMDM/Hub+Configuration",
"content": ""
},
{
"title": "APM:",
"pageID": "302703254",
"pageLink": "/pages/viewpage.action?pageId=302703254",
"content": ""
},
{
"title": "Setup APM integration in Kibana",
"pageID": "302703256",
"pageLink": "/display/GMDM/Setup+APM+integration+in+Kibana",
"content": "To set up APM integration in Kibana you need to deploy the fleet server first. To do so, enable it in the mdm-hub-cluster-env repository (e.g. in emea/nprod/namespaces/emea-backend/values.yaml). After deploying it, open the Kibana UI and go to Fleet. Verify that fleet-server is properly configured: Go to Observability - APM. Click Add the APM Integration. Click Add Elastic APM. Change host to 0.0.0.0:8200. In section 2 choose Existing hosts and select the desired agent policy (Fleet server on ECK policy). Save changes. After configuring your service to connect to apm-server, it should be visible in Observability.APM"
},
{
"title": "Consul:",
"pageID": "302705585",
"pageLink": "/pages/viewpage.action?pageId=302705585",
"content": ""
},
{
"title": "Updating Dictionary",
"pageID": "164470212",
"pageLink": "/display/GMDM/Updating+Dictionary",
"content": "To update a dictionary from Excel:\n\n\tConvert the Excel file to CSV format\n\tChange EOL to Unix\n\tPut the file in the appropriate path in the mdm-config-registry repository in config-ext\n\tCheck the Updating ETL Dictionaries in Consul page for the appropriate Consul UI URL (you need to have a security token set in the ACL section)"
},
{
"title": "Updating ETL Dictionaries in Consul",
"pageID": "164470102",
"pageLink": "/display/GMDM/Updating+ETL+Dictionaries+in+Consul",
"content": "The configuration repository has dedicated directories that store dictionaries used by the ETL engine during data loading with the batch service. The content of these directories is published in Consul. The table shows the dir name and the Consul key under which the data is posted:Dir nameConsul keyconfig-ext/dev_gblushttps://consul-amer-nprod-gbl-mdm-hub.COMPANY.com/ui/dc1/kv/dev_gblus/config-ext/qa_gblushttps://consul-amer-nprod-gbl-mdm-hub.COMPANY.com/ui/dc1/kv/qa_gblus/config-ext/prod_gblushttps://consul-amer-prod-gbl-mdm-hub.COMPANY.com/ui/dc1/kv/prod_gblus/config-ext/dev_emeahttps://consul-emea-nprod-gbl-mdm-hub.COMPANY.com/ui/dc1/kv/dev_emea/config-ext/qa_emeahttps://consul-emea-nprod-gbl-mdm-hub.COMPANY.com/ui/dc1/kv/qa_emea/config-ext/stage_emeahttps://consul-emea-nprod-gbl-mdm-hub.COMPANY.com/ui/dc1/kv/stage_emea/config-ext/prod_emeahttps://consul-emea-prod-gbl-mdm-hub.COMPANY.com/ui/dc1/kv/prod_emea/config-ext/dev_apachttps://consul-apac-nprod-gbl-mdm-hub.COMPANY.com/ui/dc1/kv/dev_apac/config-ext/qa_apachttps://consul-apac-nprod-gbl-mdm-hub.COMPANY.com/ui/dc1/kv/qa_apac/config-ext/stage_apachttps://consul-apac-nprod-gbl-mdm-hub.COMPANY.com/ui/dc1/kv/stage_apac/config-ext/prod_apachttps://consul-apac-prod-gbl-mdm-hub.COMPANY.com/ui/dc1/kv/prod_apac/To update Consul values you have to:Make changes in the desired directory and push them to the master git branch,git2consul will synchronize the git repo to Consul. Please be advised that a proper SecretId token is required to access the key/value path you desire. This is especially important for AMER/GBLUS directories. "
},
{
"title": "Environment Setup:",
"pageID": "164470244",
"pageLink": "/pages/viewpage.action?pageId=164470244",
"content": ""
},
{
"title": "Configuration (amer k8s)",
"pageID": "228917406",
"pageLink": "/pages/viewpage.action?pageId=228917406",
"content": "Configuration steps:Configure mongo permissions for users mdm_batch_service, mdmhub, and mdmgw. Add permissions to database schema related to new environment:---users:  mdm_batch_service:    mongo:      databases:        reltio_amer-dev:          roles:            - "readWrite"        reltio_[tenant-env]:             - "readWrite"2. Add directory with environment configuration files in amer/nprod/namespaces/. You can just make a copy of the existing amer-dev configuration.3. Change file [tenant-env]/values.yaml:Change the value of "env" property,Change the value of "logging_index" property,Change the address of oauth service - "kong_plugins.mdm_external_oauth.introspection_url" property. Use value from below table:Env classoAuth introspection URLDEVhttps://devfederate.COMPANY.com/as/introspect.oauth2QAhttps://devfederate.COMPANY.com/as/introspect.oauth2STAGEhttps://stgfederate.COMPANY.com/as/introspect.oauth2PRODhttps://prodfederate.COMPANY.com/as/introspect.oauth24. Change file [tenant-env]/kafka-topics.yaml by changing the prefix of topic names.5. Add kafka connect instance for newly added environment - add the configuration section to kafkaConnect property located in amer/nprod/namespaces/amer-backend/values.yaml5.1 Add secrets - kafkaConnect.[tenant-env].connectors.[tenant-env]-snowflake-sink-connector.spec.config.snowflake.private.key.passphrase and kafkaConnect.[tenant-env].connectors.[tenant-env]-snowflake-sink-connector.spec.config.snowflake.private.key6. 
Configure Consul (amer/nprod/namespaces/amer-backend/values.yaml and amer/nprod/namespaces/amer-backend/secrets.yaml):Add repository to git2consul - property git2consul.repos,Add policies - property consul_acl.policies,And policy binding - property consul_acl.tokens.mdmetl-token.policiesAdd secrets - git2consul.repos.[tenant-env].credentials.username: and git2consul.repos.[tenant-env].credentials.passwordCreate proper branch in mdm-hub-env-config repo, like in an example: config/dev_amer - http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse?at=refs%2Fheads%2Fconfig%2Fdev_amer7. Modify components configuration:Change [tenant-env]/config_files/all/config/application.yamlchange "env" property,change "mdmConfig.baseURL" property,change "mdmConfig.rdmURL" property,change "mdmConfig.workflow.url" property,Change [tenant-env]/config_files/event-publisher/config/application.yaml:Change "local_env" propertyChange [tenant-env]/config_files/reltio-subscriber/config/application.yaml:Change "sqs" properties according to Reltio configuration,check and confirm if secrets for this component needn't be changed - changing of sqs queue could cause changing of AWS credentials - verify with Reltio's tenant configuration,Change [tenant-env]/config_files/mdm-manager/config/application.yaml:Change "mdmAsyncAPI.principalMappings" according the correct topic names.COMPANY Reltio tenants details for the above properties:8. Add transaction topics in fluentd configuration - amer/nprod/namespaces/amer-backend/values.yaml and change fluentd.kafka.topics list.9. 
Monitoringa) Add additional service monitor to amer/nprod/namespaces/monitoring/service-monitors.yaml configuration file:- namespace: [tenant-env]  name: sm-[tenant-env]-services  selector:    matchLabels:      prometheus: [tenant-env]-services  endpoints:    - port: prometheus      interval: 30s      scrapeTimeout: 30s    - port: prometheus-fluent-bit      path: "/api/v1/metrics/prometheus"      interval: 30s      scrapeTimeout: 30sb) Add Snowflake database details to amer/nprod/namespaces/monitoring/jdbc-exporter.yaml configuration file:jdbcExporters: amer-dev: db: url: "jdbc:snowflake://amerdev01.us-east-1.privatelink.snowflakecomputing.com/?db=COMM_AMER_MDM_DMART_DEV_DB&role=COMM_AMER_MDM_DMART_DEV_DEVOPS_ROLE&warehouse=COMM_MDM_DMART_WH" username: "[ USERNAME ]"Add ●●●●●●●●●●● amer/nprod/namespaces/monitoring/secrets.yamljdbcExporters: amer-dev: db: password: "[ ●●●●●●●●●●●10. Run Jenkins job responsible for deploying backend services - to apply mongo and fluentd changes.11. Connect to mongodb server and create scheme reltio_[tenant-env].11.1 Create collections and indexes in the newly added schemas: Intellishelldb.createCollection("entityHistory") db.entityHistory.createIndex({country: -1},  {background: true, name:  "idx_country"});db.entityHistory.createIndex({sources: -1},  {background: true, name:  "idx_sources"});db.entityHistory.createIndex({entityType: -1},  {background: true, name:  "idx_entityType"});db.entityHistory.createIndex({status: -1},  {background: true, name:  "idx_status"});db.entityHistory.createIndex({creationDate: -1},  {background: true, name:  "idx_creationDate"});db.entityHistory.createIndex({lastModificationDate: -1},  {background: true, name:  "idx_lastModificationDate"});db.entityHistory.createIndex({"entity.crosswalks.value": 1},  {background: true, name:  "idx_crosswalks_v_asc"});db.entityHistory.createIndex({"entity.crosswalks.type": 1},  {background: true, name:  
"idx_crosswalks_t_asc"});db.entityHistory.createIndex({forceModificationDate: -1},  {background: true, name:  "idx_forceModificationDate"});db.entityHistory.createIndex({mdmSource: -1},  {background: true, name:  "idx_mdmSource"});db.entityHistory.createIndex({entityChecksum: -1},  {background: true, name:  "idx_entityChecksum"});db.entityHistory.createIndex({parentEntityId: -1},  {background: true, name:  "idx_parentEntityId"});db.entityHistory.createIndex({COMPANYGlobalCustomerID: -1},  {background: true, name:  "idx_COMPANYGlobalCustomerID"});db.createCollection("entityRelations")db.entityRelations.createIndex({country: -1},  {background: true, name:  "idx_country"});db.entityRelations.createIndex({sources: -1},  {background: true, name:  "idx_sources"});db.entityRelations.createIndex({relationType: -1},  {background: true, name:  "idx_relationType"});db.entityRelations.createIndex({status: -1},  {background: true, name:  "idx_status"});db.entityRelations.createIndex({creationDate: -1},  {background: true, name:  "idx_creationDate"});db.entityRelations.createIndex({lastModificationDate: -1},  {background: true, name:  "idx_lastModificationDate"});db.entityRelations.createIndex({startObjectId: -1},  {background: true, name:  "idx_startObjectId"});db.entityRelations.createIndex({endObjectId: -1},  {background: true, name:  "idx_endObjectId"});db.entityRelations.createIndex({"relation.crosswalks.value": 1},  {background: true, name:  "idx_crosswalks_v_asc"});   db.entityRelations.createIndex({"relation.crosswalks.type": 1},  {background: true, name:  "idx_crosswalks_t_asc"});   db.entityRelations.createIndex({forceModificationDate: -1},  {background: true, name:  "idx_forceModificationDate"});   db.entityRelations.createIndex({mdmSource: -1},  {background: true, name:  "idx_mdmSource"}); db.createCollection("LookupValues")db.LookupValues.createIndex({updatedOn: 1},  {background: true, name:  "idx_updatedOn"});db.LookupValues.createIndex({countries: 1},  
{background: true, name:  "idx_countries"});db.LookupValues.createIndex({mdmSource: 1},  {background: true, name:  "idx_mdmSource"});db.LookupValues.createIndex({type: 1},  {background: true, name:  "idx_type"});db.LookupValues.createIndex({code: 1},  {background: true, name:  "idx_code"});db.LookupValues.createIndex({valueUpdateDate: 1},  {background: true, name:  "idx_valueUpdateDate"});db.createCollection("ErrorLogs")db.ErrorLogs.createIndex({plannedResubmissionDate: -1},  {background: true, name:  "idx_plannedResubmissionDate_-1"});db.ErrorLogs.createIndex({timestamp: -1},  {background: true, name:  "idx_timestamp_-1"});db.ErrorLogs.createIndex({exceptionClass: 1},  {background: true, name:  "idx_exceptionClass_1"});db.ErrorLogs.createIndex({status: -1},  {background: true, name:  "idx_status_-1"});db.createCollection("batchEntityProcessStatus")db.batchEntityProcessStatus.createIndex({batchName: -1, sourceId: -1},  {background: true, name:  "idx_findByBatchNameAndSourceId"});db.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, objectType: -1, sourceIngestionDate: -1},  {background: true, name:  "idx_EntitiesUnseen_SoftDeleteJob"});db.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, sendDateMDM: -1, updateDateMDM: -1},  {background: true, name:  "idx_ProcessingResult_ProcessingJob"});db.batchEntityProcessStatus.createIndex({batchName: -1, sendDateMDM: -1, updateDateMDM: -1},  {background: true, name:  "idx_ProcessingResultAll_ProcessingJob"});db.createCollection("batchInstance")db.createCollection("relationCache")db.relationCache.createIndex({startSourceId: -1},  {background: true, name:  "idx_findByStartSourceId"});db.createCollection("DCRRequests")db.DCRRequests.createIndex({type: -1, "status.name": -1},  {background: true, name:  "idx_typeStatusNameFind_TraceVR"});db.DCRRequests.createIndex({entityURI: -1, "status.name": -1},  {background: true, name:  
"idx_entityURIStatusNameFind_SubmitVR"});db.DCRRequests.createIndex({changeRequestURI: -1, "status.name": -1},  {background: true, name:  "idx_changeRequestURIStatusNameFind_DSResponse"});db.createCollection("entityMatchesHistory")db.entityMatchesHistory.createIndex({_id: -1, "matches.matchObjectUri": -1, "matches.matchType": -1},  {background: true, name:  "idx_findAutoLinkMatch_CleanerStream"});db.createCollection("DCRRegistry")db.DCRRegistry.createIndex({"status.changeDate": -1},  {background: true, name:  "idx_changeDate_FindDCRsBy"});db.DCRRegistry.createIndex({extDCRRequestId: -1},  {background: true, name:  "idx_extDCRRequestId_FindByExtId"});db.DCRRegistry.createIndex({changeRequestURI: -1, "status.name": -1},  {background: true, name:  "idx_changeRequestURIStatusNameFind_DSResponse"});db.DCRRegistryONEKEY.createIndex({type: -1, "status.name": -1},  {background: true, name:  "idx_typeStatusNameFind_TraceVR"});db.createCollection("sequenceCounters")db.sequenceCounters.insertOne({_id: "COMPANYAddressIDSeq", sequence: NumberLong([sequence start number])}) //NOTE!!!! replace the text [sequence start number] with the value from the table below. Region / Seq start number: emea - 5000000000, amer - 6000000000, apac - 7000000000. 12. Run Jenkins job to deploy kafka resources and mdmhub components for the new environment.13. Create paths on S3 bucket required by Snowflake and Airflow's DAGs.14. Configure Kibana:Add index patterns,Configure retention,Add dashboards.15. Configure basic Airflow DAGs (ansible directory):export_merges_from_reltio_to_s3_full,hub_reconciliation_v2,lookup_values_export_to_s3,reconciliation_snowflake.16. Deploy DAGs (NOTE: check if your kubectl is configured to communicate with the cluster you want to change):ansible-playbook install_mdmgw_airflow_services_k8s.yml -i inventory/[tenant-env]/inventory17. Configure Snowflake for the [tenant-env] in mdm-hub-env-config as in example inventory/dev_amer/group_vars/snowflake/*. 
Verification pointsCheck Reltio's configuration - get reltio tenant configuration:Check if you are able to execute Reltio's operations using the credentials of the service user,Check if streaming processing is enabled - streamingConfig.messaging.destinations.enabled = true, streamingConfig.streamingEnabled=true, streamingConfig.streamingAPIEnabled=true,Check if Cassandra export is configured - exportConfig.smartExport.secondaryDsEnabled = false.Check Kafka:Check if you are able to connect to the kafka server using a command line client running from your local machine.Check Mongo:Users mdmgw, mdmhub and mdm_batch_service - permissions for the newly added database (readWrite),Indexes,Verify if the correct start value is set for the sequence COMPANYAddressIDSeq - collection sequenceCounters _id = COMPANYAddressIDSeq.Check MDMHUB API:Check mdm-manager API with apikey authentication by executing one of the read operations: GET {{ manager_url }}/entities?filter=equals(type, 'configuration/entityTypes/HCP'). The request should execute properly (HTTP status code 200) and return some HCP objects. 
An empty response is also possible when there is no HCP data in Reltio,Run the same operation using oAuth2 authentication - remember that the manager url is different,Check mdm-manager API with apikey authentication by executing a write operation:curl --location --request POST '{{ manager_url }}/hcp' \\--header 'apikey: {{ api_key }}' \\--header 'Content-Type: application/json' \\--data-raw '{  "hcp" : {    "type" : "configuration/entityTypes/HCP",    "attributes" : {      "Country" : [ {        "value" : "{{ country }}"      } ],      "FirstName" : [ {        "value" : "Verification Test MDMHUB"      } ],      "LastName" : [ {        "value" : "Verification Test MDMHUB"      } ]    },    "crosswalks" : [ {      "type" : "configuration/sources/{{ source }}",      "value" : "verification_test_mdmhub"    } ]  }}'Replace all placeholders in the above request with the correct values for the configured environment. The response should return HTTP code 200 and a URI of the created object. After verification, delete the created object by running: curl --location --request DELETE '{{ manager_url }}/entities/crosswalk?type={{ source }}&value=verification_test_mdmhub' --header 'apikey: {{ api_key }}'Run the same operations using oAuth2 authentication - remember that the mdm manager url is different,Verify api-router API with apikey authentication using a search operation: GET {{ api_router_url }}/entities?filter=equals(type, 'configuration/entityTypes/HCP'). The request should execute properly (HTTP status code 200) and return some HCP objects. 
An empty response is also possible when there is no HCP data in Reltio,Check api-router API with apikey authentication by executing a write operation:curl --location --request POST '{{ api_router_url }}/hcp' \\--header 'apikey: {{ api_key }}' \\--header 'Content-Type: application/json' \\--data-raw '{  "hcp" : {    "type" : "configuration/entityTypes/HCP",    "attributes" : {      "Country" : [ {        "value" : "{{ country }}"      } ],      "FirstName" : [ {        "value" : "Verification Test MDMHUB"      } ],      "LastName" : [ {        "value" : "Verification Test MDMHUB"      } ]    },    "crosswalks" : [ {      "type" : "configuration/sources/{{ source }}",      "value" : "verification_test_mdmhub"    } ]  }}'Replace all placeholders in the above request with the correct values for the configured environment. The response should return HTTP code 200 and a URI of the created object. After verification, delete the created object by running: curl --location --request DELETE '{{ api_router_url }}/entities/crosswalk?type={{ source }}&value=verification_test_mdmhub' --header 'apikey: {{ api_key }}'Run the same operations using oAuth2 authentication - remember that the api router url is different,Check batch service API with apikey authentication by executing the following operation GET {{ batch_service_url }}/batchController/NA/instances/NA. The request should return 403 HTTP Code and body:{    "code": "403",    "message": "Forbidden: com.COMPANY.mdm.security.AuthorizationException: Batch 'NA' is not allowed."}The request doesn't create any batch.Run the same operation using oAuth2 authentication - remember that the batch service url is different,Verify component logs: mdm-manager, api-router and batch-service. Focus on errors and kafka records - rebalancing, authorization problems, topic existence warnings etc.MDMHUB streaming services:Check logs of reltio-subscriber, entity-enricher, callback-service, event-publisher and mdm-reconciliation-service components. 
Verify that there are no errors or kafka warnings related to rebalancing, authorization problems, topic existence warnings etc,Verify if the lookup refresh process is working properly - check the existence of the mongo collection LookupValues. It should have data,Airflow:Check if DAGs are enabled and have a defined schedule,Run DAGs: export_merges_from_reltio_to_s3_full_{{ env }}, hub_reconciliation_v2_{{ env }}, lookup_values_export_to_s3_{{ env }}, reconciliation_snowflake_{{ env }}.Wait for them to finish and validate the results.Snowflake:Check snowflake connector logs,Check if tables HUB_KAFKA_DATA, LOV_DATA, MERGE_TREE_DATA exist in the LANDING schema and have data,Verify if the mdm-hub-snowflake-dm package is deployed,What else?Monitoring:Check grafana dashboards:HUB Performance,Kafka Topics Overview,Host Statistics,JMX Overview,Kong,MongoDB.Check Kibana index patterns:{{env}}-internal-batch-efk-transactions*,{{env}}-internal-gw-efk-transactions*,{{env}}-internal-publisher-efk-transactions*,{{env}}-internal-subscriber-efk-transactions*,{{env}}-mdmhub,Check Kibana dashboards:{{env}} API calls,{{env}} Batch Instances,{{env}} Batch loads,{{env}} Error Logs Overview,{{env}} Error Logs RDM,{{env}} HUB Store,{{env}} HUB events,{{env}} MDM Events,{{env}} Profile Updates,Check alerts - How?"
},
{
"title": "Configuration (amer prod k8s)",
"pageID": "234691394",
"pageLink": "/pages/viewpage.action?pageId=234691394",
"content": "Configuration steps:Copy mdm-hub-cluster-env/amer/nprod directory into mdm-hub-cluster-env/amer/prod directory.Replace ...CertificatesGenerate private-keys, CSRs and request Kong certificate (kong/config_files/certs).\nmarek@CF-19CHU8:~$ openssl req -nodes -newkey rsa:2048 -sha256 -keyout api-amer-prod-gbl-mdm-hub.COMPANY.com.key -out api-amer-prod-gbl-mdm-hub.COMPANY.com.csr\nGenerating a RSA private key\n.....+++++\n.....................................................+++++\nwriting new private key to 'api-amer-prod-gbl-mdm-hub.COMPANY.com.key'\n-----\nYou are about to be asked to enter information that will be incorporated\ninto your certificate request.\nWhat you are about to enter is what is called a Distinguished Name or a DN.\nThere are quite a few fields but you can leave some blank\nFor some fields there will be a default value,\nIf you enter '.', the field will be left blank.\n-----\nCountry Name (2 letter code) [AU]:\nState or Province Name (full name) [Some-State]:\nLocality Name (eg, city) []:\nOrganization Name (eg, company) [Internet Widgits Pty Ltd]:COMPANY Incorporated\nOrganizational Unit Name (eg, section) []:\nCommon Name (e.g. 
server FQDN or YOUR name) []: api-amer-prod-gbl-mdm-hub.COMPANY.com\nEmail Address []:DL-ATP_MDMHUB_SUPPORT_PROD@COMPANY.com\n\nPlease enter the following 'extra' attributes\nto be sent with your certificate request\nA challenge <a href="https://confluence.COMPANY.com/plugins/servlet/pii4conf/pii?id=1588632">●●●●●●●●●●●●</a>\nAn optional company name []:\nGenerate private-keys, CSRs and request Kafka certificate (apac-backend/secret.yaml\nmarek@CF-19CHU8:~$ openssl req -nodes -newkey rsa:2048 -sha256 -keyout kafka-amer-prod-gbl-mdm-hub.COMPANY.com.key -out kafka-amer-prod-gbl-mdm-hub.COMPANY.com.csr\nGenerating a RSA private key\n..........................+++++\n.....+++++\nwriting new private key to 'kafka-amer-prod-gbl-mdm-hub.COMPANY.com.key'\n-----\nYou are about to be asked to enter information that will be incorporated\ninto your certificate request.\nWhat you are about to enter is what is called a Distinguished Name or a DN.\nThere are quite a few fields but you can leave some blank\nFor some fields there will be a default value,\nIf you enter '.', the field will be left blank.\n-----\nCountry Name (2 letter code) [AU]:\nState or Province Name (full name) [Some-State]:\nLocality Name (eg, city) []:\nOrganization Name (eg, company) [Internet Widgits Pty Ltd]:COMPANY Incorporated\nOrganizational Unit Name (eg, section) []:\nCommon Name (e.g. server FQDN or YOUR name) []:kafka-amer-prod-gbl-mdm-hub.COMPANY.com\nEmail Address []:DL-ATP_MDMHUB_SUPPORT_PROD@COMPANY.com\n\nPlease enter the following 'extra' attributes\nto be sent with your certificate request\nA challenge <a href="https://confluence.COMPANY.com/plugins/servlet/pii4conf/pii?id=1588633">●●●●●●●●●●●●</a>\nAn optional company name []:\nBELOW IS AMER NPROD COPY WE USE AS A REFERENCEConfiguration steps:Configure mongo permissions for users mdm_batch_service, mdmhub, and mdmgw. 
Add permissions to database schema related to new environment:---users:  mdm_batch_service:    mongo:      databases:        reltio_amer-dev:          roles:            - "readWrite"        reltio_[tenant-env]:             - "readWrite"2. Add directory with environment configuration files in amer/nprod/namespaces/. You can just make a copy of the existing amer-dev configuration.3. Change file [tenant-env]/values.yaml:Change the value of "env" property,Change the value of "logging_index" property,Change the address of oauth service - "kong_plugins.mdm_external_oauth.introspection_url" property. Use the value from the table below. Env class / oAuth introspection URL: DEV - https://devfederate.COMPANY.com/as/introspect.oauth2, QA - https://devfederate.COMPANY.com/as/introspect.oauth2, STAGE - https://stgfederate.COMPANY.com/as/introspect.oauth2, PROD - https://prodfederate.COMPANY.com/as/introspect.oauth2. 4. Change file [tenant-env]/kafka-topics.yaml by changing the prefix of topic names.5. Add kafka connect instance for newly added environment - add the configuration section to kafkaConnect property located in amer/nprod/namespaces/amer-backend/values.yaml5.1 Add secrets - kafkaConnect.[tenant-env].connectors.[tenant-env]-snowflake-sink-connector.spec.config.snowflake.private.key.passphrase and kafkaConnect.[tenant-env].connectors.[tenant-env]-snowflake-sink-connector.spec.config.snowflake.private.key6. 
Configure Consul (amer/nprod/namespaces/amer-backend/values.yaml and amer/nprod/namespaces/amer-backend/secrets.yaml):Add repository to git2consul - property git2consul.repos,Add policies - property consul_acl.policies,And policy binding - property consul_acl.tokens.mdmetl-token.policiesAdd secrets - git2consul.repos.[tenant-env].credentials.username: and git2consul.repos.[tenant-env].credentials.passwordCreate proper branch in mdm-hub-env-config repo, like in an example: config/dev_amer - http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse?at=refs%2Fheads%2Fconfig%2Fdev_amer7. Modify components configuration:Change [tenant-env]/config_files/all/config/application.yamlchange "env" property,change "mdmConfig.baseURL" property,change "mdmConfig.rdmURL" property,change "mdmConfig.workflow.url" property,Change [tenant-env]/config_files/event-publisher/config/application.yaml:Change "local_env" propertyChange [tenant-env]/config_files/reltio-subscriber/config/application.yaml:Change "sqs" properties according to Reltio configuration,check and confirm whether secrets for this component need to be changed - changing the sqs queue could also change the AWS credentials - verify against Reltio's tenant configuration,Change [tenant-env]/config_files/mdm-manager/config/application.yaml:Change "mdmAsyncAPI.principalMappings" according to the correct topic names.COMPANY Reltio tenants details for the above properties:8. Add transaction topics in fluentd configuration - amer/nprod/namespaces/amer-backend/values.yaml and change fluentd.kafka.topics list.9. 
Monitoringa) Add additional service monitor to amer/nprod/namespaces/monitoring/service-monitors.yaml configuration file:- namespace: [tenant-env]  name: sm-[tenant-env]-services  selector:    matchLabels:      prometheus: [tenant-env]-services  endpoints:    - port: prometheus      interval: 30s      scrapeTimeout: 30s    - port: prometheus-fluent-bit      path: "/api/v1/metrics/prometheus"      interval: 30s      scrapeTimeout: 30sb) Add Snowflake database details to amer/nprod/namespaces/monitoring/jdbc-exporter.yaml configuration file:jdbcExporters: amer-dev: db: url: "jdbc:snowflake://amerdev01.us-east-1.privatelink.snowflakecomputing.com/?db=COMM_AMER_MDM_DMART_DEV_DB&role=COMM_AMER_MDM_DMART_DEV_DEVOPS_ROLE&warehouse=COMM_MDM_DMART_WH" username: "[ USERNAME ]"Add ●●●●●●●●●●● amer/nprod/namespaces/monitoring/secrets.yamljdbcExporters: amer-dev: db: password: "[ ●●●●●●●●●●●10. Run Jenkins job responsible for deploying backend services - to apply mongo and fluentd changes.11. Connect to mongodb server and create scheme reltio_[tenant-env].11.1 Create collections and indexes in the newly added schemas: Intellishelldb.createCollection("entityHistory") db.entityHistory.createIndex({country: -1},  {background: true, name:  "idx_country"});db.entityHistory.createIndex({sources: -1},  {background: true, name:  "idx_sources"});db.entityHistory.createIndex({entityType: -1},  {background: true, name:  "idx_entityType"});db.entityHistory.createIndex({status: -1},  {background: true, name:  "idx_status"});db.entityHistory.createIndex({creationDate: -1},  {background: true, name:  "idx_creationDate"});db.entityHistory.createIndex({lastModificationDate: -1},  {background: true, name:  "idx_lastModificationDate"});db.entityHistory.createIndex({"entity.crosswalks.value": 1},  {background: true, name:  "idx_crosswalks_v_asc"});db.entityHistory.createIndex({"entity.crosswalks.type": 1},  {background: true, name:  
"idx_crosswalks_t_asc"});db.entityHistory.createIndex({forceModificationDate: -1},  {background: true, name:  "idx_forceModificationDate"});db.entityHistory.createIndex({mdmSource: -1},  {background: true, name:  "idx_mdmSource"});db.entityHistory.createIndex({entityChecksum: -1},  {background: true, name:  "idx_entityChecksum"});db.entityHistory.createIndex({parentEntityId: -1},  {background: true, name:  "idx_parentEntityId"});db.entityHistory.createIndex({COMPANYGlobalCustomerID: -1},  {background: true, name:  "idx_COMPANYGlobalCustomerID"});db.createCollection("entityRelations")db.entityRelations.createIndex({country: -1},  {background: true, name:  "idx_country"});db.entityRelations.createIndex({sources: -1},  {background: true, name:  "idx_sources"});db.entityRelations.createIndex({relationType: -1},  {background: true, name:  "idx_relationType"});db.entityRelations.createIndex({status: -1},  {background: true, name:  "idx_status"});db.entityRelations.createIndex({creationDate: -1},  {background: true, name:  "idx_creationDate"});db.entityRelations.createIndex({lastModificationDate: -1},  {background: true, name:  "idx_lastModificationDate"});db.entityRelations.createIndex({startObjectId: -1},  {background: true, name:  "idx_startObjectId"});db.entityRelations.createIndex({endObjectId: -1},  {background: true, name:  "idx_endObjectId"});db.entityRelations.createIndex({"relation.crosswalks.value": 1},  {background: true, name:  "idx_crosswalks_v_asc"});   db.entityRelations.createIndex({"relation.crosswalks.type": 1},  {background: true, name:  "idx_crosswalks_t_asc"});   db.entityRelations.createIndex({forceModificationDate: -1},  {background: true, name:  "idx_forceModificationDate"});   db.entityRelations.createIndex({mdmSource: -1},  {background: true, name:  "idx_mdmSource"}); db.createCollection("LookupValues")db.LookupValues.createIndex({updatedOn: 1},  {background: true, name:  "idx_updatedOn"});db.LookupValues.createIndex({countries: 1},  
{background: true, name:  "idx_countries"});db.LookupValues.createIndex({mdmSource: 1},  {background: true, name:  "idx_mdmSource"});db.LookupValues.createIndex({type: 1},  {background: true, name:  "idx_type"});db.LookupValues.createIndex({code: 1},  {background: true, name:  "idx_code"});db.LookupValues.createIndex({valueUpdateDate: 1},  {background: true, name:  "idx_valueUpdateDate"});db.createCollection("ErrorLogs")db.ErrorLogs.createIndex({plannedResubmissionDate: -1},  {background: true, name:  "idx_plannedResubmissionDate_-1"});db.ErrorLogs.createIndex({timestamp: -1},  {background: true, name:  "idx_timestamp_-1"});db.ErrorLogs.createIndex({exceptionClass: 1},  {background: true, name:  "idx_exceptionClass_1"});db.ErrorLogs.createIndex({status: -1},  {background: true, name:  "idx_status_-1"});db.createCollection("batchEntityProcessStatus")db.batchEntityProcessStatus.createIndex({batchName: -1, sourceId: -1},  {background: true, name:  "idx_findByBatchNameAndSourceId"});db.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, objectType: -1, sourceIngestionDate: -1},  {background: true, name:  "idx_EntitiesUnseen_SoftDeleteJob"});db.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, sendDateMDM: -1, updateDateMDM: -1},  {background: true, name:  "idx_ProcessingResult_ProcessingJob"});db.batchEntityProcessStatus.createIndex({batchName: -1, sendDateMDM: -1, updateDateMDM: -1},  {background: true, name:  "idx_ProcessingResultAll_ProcessingJob"});db.createCollection("batchInstance")db.createCollection("relationCache")db.relationCache.createIndex({startSourceId: -1},  {background: true, name:  "idx_findByStartSourceId"});db.createCollection("DCRRequests")db.DCRRequests.createIndex({type: -1, "status.name": -1},  {background: true, name:  "idx_typeStatusNameFind_TraceVR"});db.DCRRequests.createIndex({entityURI: -1, "status.name": -1},  {background: true, name:  
"idx_entityURIStatusNameFind_SubmitVR"});db.DCRRequests.createIndex({changeRequestURI: -1, "status.name": -1},  {background: true, name:  "idx_changeRequestURIStatusNameFind_DSResponse"});db.createCollection("entityMatchesHistory")db.entityMatchesHistory.createIndex({_id: -1, "matches.matchObjectUri": -1, "matches.matchType": -1},  {background: true, name:  "idx_findAutoLinkMatch_CleanerStream"});db.createCollection("DCRRegistry")db.DCRRegistry.createIndex({"status.changeDate": -1},  {background: true, name:  "idx_changeDate_FindDCRsBy"});db.DCRRegistry.createIndex({extDCRRequestId: -1},  {background: true, name:  "idx_extDCRRequestId_FindByExtId"});db.DCRRegistry.createIndex({changeRequestURI: -1, "status.name": -1},  {background: true, name:  "idx_changeRequestURIStatusNameFind_DSResponse"});db.DCRRegistryONEKEY.createIndex({type: -1, "status.name": -1},  {background: true, name:  "idx_typeStatusNameFind_TraceVR"});db.createCollection("sequenceCounters")db.sequenceCounters.insertOne({_id: "COMPANYAddressIDSeq", sequence: NumberLong([sequence start number])}) //NOTE!!!! replace the text [sequence start number] with the value from the table below. Region / Seq start number: emea - 5000000000, amer - 6000000000, apac - 7000000000. 12. Run Jenkins job to deploy kafka resources and mdmhub components for the new environment.13. Create paths on S3 bucket required by Snowflake and Airflow's DAGs.14. Configure Kibana:Add index patterns,Configure retention,Add dashboards.15. Configure basic Airflow DAGs (ansible directory):export_merges_from_reltio_to_s3_full,hub_reconciliation_v2,lookup_values_export_to_s3,reconciliation_snowflake.16. Deploy DAGs (NOTE: check if your kubectl is configured to communicate with the cluster you want to change):ansible-playbook install_mdmgw_airflow_services_k8s.yml -i inventory/[tenant-env]/inventory17. Configure Snowflake for the [tenant-env] in mdm-hub-env-config as in example inventory/dev_amer/group_vars/snowflake/*. 
Verification pointsCheck Reltio's configuration - get reltio tenant configuration:Check if you are able to execute Reltio's operations using the credentials of the service user,Check if streaming processing is enabled - streamingConfig.messaging.destinations.enabled = true, streamingConfig.streamingEnabled=true, streamingConfig.streamingAPIEnabled=true,Check if Cassandra export is configured - exportConfig.smartExport.secondaryDsEnabled = false.Check Mongo:Users mdmgw, mdmhub and mdm_batch_service - permissions for the newly added database (readWrite),Indexes,Verify if the correct start value is set for the sequence COMPANYAddressIDSeq - collection sequenceCounters _id = COMPANYAddressIDSeq.Check MDMHUB API:Check mdm-manager API with apikey authentication by executing one of the read operations: GET {{ manager_url }}/entities?filter=equals(type, 'configuration/entityTypes/HCP'). The request should execute properly (HTTP status code 200) and return some HCP objects. An empty response is also possible when there is no HCP data in Reltio,Run the same operation using oAuth2 authentication - remember that the manager url is different,Verify api-router API with apikey authentication using a search operation: GET {{ api_router_url }}/entities?filter=equals(type, 'configuration/entityTypes/HCP'). The request should execute properly (HTTP status code 200) and return some HCP objects. An empty response is also possible when there is no HCP data in Reltio,Run the same operation using oAuth2 authentication - remember that the api router url is different,Check batch service API with apikey authentication by executing the following operation GET {{ batch_service_url }}/batchController/NA/instances/NA. 
The request should return 403 HTTP Code and body:{    "code": "403",    "message": "Forbidden: com.COMPANY.mdm.security.AuthorizationException: Batch 'NA' is not allowed."}The request doesn't create any batch.Run the same operation using oAuth2 authentication - remember that the batch service url is different,Verify component logs: mdm-manager, api-router and batch-service. Focus on errors and kafka records - rebalancing, authorization problems, topic existence warnings etc.MDMHUB streaming services:Check logs of reltio-subscriber, entity-enricher, callback-service, event-publisher and mdm-reconciliation-service components. Verify that there are no errors or kafka warnings related to rebalancing, authorization problems, topic existence warnings etc,Verify if the lookup refresh process is working properly - check the existence of the mongo collection LookupValues. It should have data,Airflow:Run DAGs: export_merges_from_reltio_to_s3_full_{{ env }}, hub_reconciliation_v2_{{ env }}, lookup_values_export_to_s3_{{ env }}, reconciliation_snowflake_{{ env }}.Wait for them to finish and validate the results.Snowflake:Check snowflake connector logs,Check if tables HUB_KAFKA_DATA, LOV_DATA, MERGE_TREE_DATA exist in the LANDING schema and have data,Verify if the mdm-hub-snowflake-dm package is deployed,What else?Monitoring:Check grafana dashboards:HUB Performance,Kafka Topics Overview,Host Statistics,JMX Overview,Kong,MongoDB.Check Kibana index patterns:{{env}}-internal-batch-efk-transactions*,{{env}}-internal-gw-efk-transactions*,{{env}}-internal-publisher-efk-transactions*,{{env}}-internal-subscriber-efk-transactions*,{{env}}-mdmhub,Check Kibana dashboards:{{env}} API calls,{{env}} Batch Instances,{{env}} Batch loads,{{env}} Error Logs Overview,{{env}} Error Logs RDM,{{env}} HUB Store,{{env}} HUB events,{{env}} MDM Events,{{env}} Profile Updates,Check alerts - How?"
},
{
"title": "Configuration (apac k8s)",
"pageID": "228933487",
"pageLink": "/pages/viewpage.action?pageId=228933487",
"content": "Installation of new APAC non-prod cluster based on AMER non-prod configuration.Copy mdm-hub-cluster-env/amer directory into mdm-hub-cluster-env/apac directory.Change dir names from "amer" to "apac".Replace everything in files in apac directory: "amer"→"apac".CertificatesGenerate private-keys, CSRs and request Kong certificate (kong/config_files/certs).\nanuskp@CF-341562$ openssl req -nodes -newkey rsa:2048 -sha256 -keyout api-apac-nprod-gbl-mdm-hub.COMPANY.com.key -out api-apac-nprod-gbl-mdm-hub.COMPANY.com.csr\nGenerating a RSA private key\n..................+++++\n.........................+++++\nwriting new private key to 'api-apac-nprod-gbl-mdm-hub.COMPANY.com.key'\n-----\nYou are about to be asked to enter information that will be incorporated\ninto your certificate request.\nWhat you are about to enter is what is called a Distinguished Name or a DN.\nThere are quite a few fields but you can leave some blank\nFor some fields there will be a default value,\nIf you enter '.', the field will be left blank.\n-----\nCountry Name (2 letter code) [AU]:\nState or Province Name (full name) [Some-State]:\nLocality Name (eg, city) []:\nOrganization Name (eg, company) [Internet Widgits Pty Ltd]:COMPANY Incorporated\nOrganizational Unit Name (eg, section) []:\nCommon Name (e.g. 
server FQDN or YOUR name) []:api-apac-nprod-gbl-mdm-hub.COMPANY.com\nEmail Address []:DL-ATP_MDMHUB_SUPPORT@COMPANY.com\n\nPlease enter the following 'extra' attributes\nto be sent with your certificate request\nA challenge <a href="https://confluence.COMPANY.com/plugins/servlet/pii4conf/pii?id=1588584">●●●●●●●●●●●●</a>\nAn optional company name []:\nSAN:DNS Name=api-apac-nprod-gbl-mdm-hub.COMPANY.comDNS Name=www.api-apac-nprod-gbl-mdm-hub.COMPANY.comDNS Name=kibana-apac-nprod-gbl-mdm-hub.COMPANY.comDNS Name=prometheus-apac-nprod-gbl-mdm-hub.COMPANY.comDNS Name=grafana-apac-nprod-gbl-mdm-hub.COMPANY.comDNS Name=elastic-apac-nprod-gbl-mdm-hub.COMPANY.comDNS Name=consul-apac-nprod-gbl-mdm-hub.COMPANY.comDNS Name=akhq-apac-nprod-gbl-mdm-hub.COMPANY.comDNS Name=airflow-apac-nprod-gbl-mdm-hub.COMPANY.comDNS Name=mongo-apac-nprod-gbl-mdm-hub.COMPANY.comDNS Name=mdm-log-management-apac-nonprod.COMPANY.comDNS Name=gbl-mdm-hub-apac-nprod.COMPANY.comPlace private-key and signed certificate in kong/config_files/certs. 
Git-ignore them and encrypt them into .encrypt files.Generate private-keys, CSRs and request Kafka certificate (apac-backend/secret.yaml)\nanuskp@CF-341562$ openssl req -nodes -newkey rsa:2048 -sha256 -keyout kafka-apac-nprod-gbl-mdm-hub.COMPANY.com.key -out kafka-apac-nprod-gbl-mdm-hub.COMPANY.com.csr\nGenerating a RSA private key\n................................................................+++++\n.......................................+++++\nwriting new private key to 'kafka-apac-nprod-gbl-mdm-hub.COMPANY.com.key'\n-----\nYou are about to be asked to enter information that will be incorporated\ninto your certificate request.\nWhat you are about to enter is what is called a Distinguished Name or a DN.\nThere are quite a few fields but you can leave some blank\nFor some fields there will be a default value,\nIf you enter '.', the field will be left blank.\n-----\nCountry Name (2 letter code) [AU]:\nState or Province Name (full name) [Some-State]:\nLocality Name (eg, city) []:\nOrganization Name (eg, company) [Internet Widgits Pty Ltd]:COMPANY Incorporated\nOrganizational Unit Name (eg, section) []:\nCommon Name (e.g. 
server FQDN or YOUR name) []:kafka-apac-nprod-gbl-mdm-hub.COMPANY.com\nEmail Address []:DL-ATP_MDMHUB_SUPPORT@COMPANY.com\n\nPlease enter the following 'extra' attributes\nto be sent with your certificate request\nA challenge <a href="https://confluence.COMPANY.com/plugins/servlet/pii4conf/pii?id=1588586">●●●●●●●●●●●●</a>\nAn optional company name []:\nSAN:DNS Name=kafka-apac-nprod-gbl-mdm-hub.COMPANY.comDNS Name=kafka-b1-apac-nprod-gbl-mdm-hub.COMPANY.comDNS Name=kafka-b2-apac-nprod-gbl-mdm-hub.COMPANY.comDNS Name=kafka-b3-apac-nprod-gbl-mdm-hub.COMPANY.comDNS Name=kafka-b4-apac-nprod-gbl-mdm-hub.COMPANY.comDNS Name=kafka-b5-apac-nprod-gbl-mdm-hub.COMPANY.comDNS Name=kafka-b6-apac-nprod-gbl-mdm-hub.COMPANY.comAfter receiving the certificate, encode it with base64 and paste into apac-backend/secrets.yaml:  -> secrets.mdm-kafka-external-listener-cert.listener.key  -> secrets.mdm-kafka-external-listener-cert.listener.crt  (*) Since this is a new environment, remove everything under "migration" key in apac-backend/values.yaml.Replace all user_passwords in apac/nprod/secrets.yaml. for each ●●●●●●●●●●●●●●●●● a new, 32-char one and globally replace it in all apac configs.Go through apac-dev/config_files one by one and adjust settings such as: Reltio, SQS etc.(*) Change Kafka topics and consumergroup names to fit naming standards. 
This is a one-time activity and does not need to be repeated if next environments will be built based on APAC config.Export amer-nprod CRDs into yaml file and import it in apac-nprod:\n$ kubectx atp-mdmhub-nprod-amer\n$ kubectl get crd -A -o yaml > ~/crd-definitions-amer.yaml\n$ kubectx atp-mdmhub-nprod-apac\n$ kubectl apply -f ~/crd-definitions-amer.yaml\nCreate config dirs for git2consul (mdm-hub-env-config):\n$ git checkout config/dev_amer\n$ git pull\n$ git branch config/dev_apac\n$ git checkout config/dev_apac\n$ git push origin config/dev_apac\nRepeat for qa and stage.Install operators:\n$ ./install.sh -l operators -r apac -c nprod -e apac-dev -v 3.9.4\nInstall backend:\n$ ./install.sh -l backend -r apac -c nprod -e apac-dev -v 3.9.4\nLog into mongodb (use port forward if there is no connection to kong: run "kubectl port-forward mongo-0 -n apac-backend 27017" and connect to mongo on localhost:27017). Run below script:\ndb.createCollection("entityHistory") \ndb.entityHistory.createIndex({country: -1}, {background: true, name: "idx_country"});\ndb.entityHistory.createIndex({sources: -1}, {background: true, name: "idx_sources"});\ndb.entityHistory.createIndex({entityType: -1}, {background: true, name: "idx_entityType"});\ndb.entityHistory.createIndex({status: -1}, {background: true, name: "idx_status"});\ndb.entityHistory.createIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});\ndb.entityHistory.createIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});\ndb.entityHistory.createIndex({"entity.crosswalks.value": 1}, {background: true, name: "idx_crosswalks_v_asc"});\ndb.entityHistory.createIndex({"entity.crosswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"});\ndb.entityHistory.createIndex({forceModificationDate: -1}, {background: true, name: "idx_forceModificationDate"});\ndb.entityHistory.createIndex({mdmSource: -1}, {background: true, name: 
"idx_mdmSource"});\ndb.entityHistory.createIndex({entityChecksum: -1}, {background: true, name: "idx_entityChecksum"});\ndb.entityHistory.createIndex({parentEntityId: -1}, {background: true, name: "idx_parentEntityId"});\n\ndb.entityHistory.createIndex({COMPANYGlobalCustomerID: -1}, {background: true, name: "idx_COMPANYGlobalCustomerID"});\n\ndb.createCollection("entityRelations")\ndb.entityRelations.createIndex({country: -1}, {background: true, name: "idx_country"});\ndb.entityRelations.createIndex({sources: -1}, {background: true, name: "idx_sources"});\ndb.entityRelations.createIndex({relationType: -1}, {background: true, name: "idx_relationType"});\ndb.entityRelations.createIndex({status: -1}, {background: true, name: "idx_status"});\ndb.entityRelations.createIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});\ndb.entityRelations.createIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});\ndb.entityRelations.createIndex({startObjectId: -1}, {background: true, name: "idx_startObjectId"});\ndb.entityRelations.createIndex({endObjectId: -1}, {background: true, name: "idx_endObjectId"});\ndb.entityRelations.createIndex({"relation.crosswalks.value": 1}, {background: true, name: "idx_crosswalks_v_asc"}); \ndb.entityRelations.createIndex({"relation.crosswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"}); \ndb.entityRelations.createIndex({forceModificationDate: -1}, {background: true, name: "idx_forceModificationDate"}); \ndb.entityRelations.createIndex({mdmSource: -1}, {background: true, name: "idx_mdmSource"});\n \ndb.createCollection("LookupValues")\ndb.LookupValues.createIndex({updatedOn: 1}, {background: true, name: "idx_updatedOn"});\ndb.LookupValues.createIndex({countries: 1}, {background: true, name: "idx_countries"});\ndb.LookupValues.createIndex({mdmSource: 1}, {background: true, name: "idx_mdmSource"});\ndb.LookupValues.createIndex({type: 1}, {background: true, name: 
"idx_type"});\ndb.LookupValues.createIndex({code: 1}, {background: true, name: "idx_code"});\ndb.LookupValues.createIndex({valueUpdateDate: 1}, {background: true, name: "idx_valueUpdateDate"});\n\ndb.createCollection("ErrorLogs")\ndb.ErrorLogs.createIndex({plannedResubmissionDate: -1}, {background: true, name: "idx_plannedResubmissionDate_-1"});\ndb.ErrorLogs.createIndex({timestamp: -1}, {background: true, name: "idx_timestamp_-1"});\ndb.ErrorLogs.createIndex({exceptionClass: 1}, {background: true, name: "idx_exceptionClass_1"});\ndb.ErrorLogs.createIndex({status: -1}, {background: true, name: "idx_status_-1"});\n\ndb.createCollection("batchEntityProcessStatus")\ndb.batchEntityProcessStatus.createIndex({batchName: -1, sourceId: -1}, {background: true, name: "idx_findByBatchNameAndSourceId"});\ndb.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, objectType: -1, sourceIngestionDate: -1}, {background: true, name: "idx_EntitiesUnseen_SoftDeleteJob"});\ndb.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, sendDateMDM: -1, updateDateMDM: -1}, {background: true, name: "idx_ProcessingResult_ProcessingJob"});\ndb.batchEntityProcessStatus.createIndex({batchName: -1, sendDateMDM: -1, updateDateMDM: -1}, {background: true, name: "idx_ProcessingResultAll_ProcessingJob"});\n\ndb.createCollection("batchInstance")\n\ndb.createCollection("relationCache")\ndb.relationCache.createIndex({startSourceId: -1}, {background: true, name: "idx_findByStartSourceId"});\n\ndb.createCollection("DCRRequests")\ndb.DCRRequests.createIndex({type: -1, "status.name": -1}, {background: true, name: "idx_typeStatusNameFind_TraceVR"});\ndb.DCRRequests.createIndex({entityURI: -1, "status.name": -1}, {background: true, name: "idx_entityURIStatusNameFind_SubmitVR"});\ndb.DCRRequests.createIndex({changeRequestURI: -1, "status.name": -1}, {background: true, name: 
"idx_changeRequestURIStatusNameFind_DSResponse"});\n\ndb.createCollection("entityMatchesHistory")\ndb.entityMatchesHistory.createIndex({_id: -1, "matches.matchObjectUri": -1, "matches.matchType": -1}, {background: true, name: "idx_findAutoLinkMatch_CleanerStream"});\n\ndb.createCollection("DCRRegistry")\ndb.DCRRegistry.createIndex({"status.changeDate": -1}, {background: true, name: "idx_changeDate_FindDCRsBy"});\ndb.DCRRegistry.createIndex({extDCRRequestId: -1}, {background: true, name: "idx_extDCRRequestId_FindByExtId"});\ndb.DCRRegistry.createIndex({changeRequestURI: -1, "status.name": -1}, {background: true, name: "idx_changeRequestURIStatusNameFind_DSResponse"});\n\ndb.DCRRegistryONEKEY.createIndex({type: -1, "status.name": -1}, {background: true, name: "idx_typeStatusNameFind_TraceVR"});\n\ndb.createCollection("sequenceCounters")\ndb.sequenceCounters.insertOne({_id: "COMPANYAddressIDSeq", sequence: NumberLong(7000000000)}) // NOTE: 7000000000 is APAC-specific\nLog into Kibana. Export dashboards/indices from AMER and import them in APAC.Install mdmhub:\n$ ./install.sh -l mdmhub -r apac -c nprod -e apac-dev -v 3.9.4\nTickets:DNS names ticket:Ticket queue: GBL-NETWORK DDITitle: Add domains to DNSDescription:Hi Team,\n\nPlease add below domains:\n\napi-apac-nprod-gbl-mdm-hub.COMPANY.com\nkibana-apac-nprod-gbl-mdm-hub.COMPANY.com\nprometheus-apac-nprod-gbl-mdm-hub.COMPANY.com\ngrafana-apac-nprod-gbl-mdm-hub.COMPANY.com\nelastic-apac-nprod-gbl-mdm-hub.COMPANY.com\nconsul-apac-nprod-gbl-mdm-hub.COMPANY.com\nakhq-apac-nprod-gbl-mdm-hub.COMPANY.com\nairflow-apac-nprod-gbl-mdm-hub.COMPANY.com\nmongo-apac-nprod-gbl-mdm-hub.COMPANY.com\nmdm-log-management-apac-nonprod.COMPANY.com\ngbl-mdm-hub-apac-nprod.COMPANY.com\n\nas CNAMEs of our ELB:\na81322116787943bf80a29940dbc2891-00e7418d9be731b0.elb.ap-southeast-1.amazonaws.comAlso, please add one CNAME for each one of below ELBs:\n\nCNAME: kafka-apac-nprod-gbl-mdm-hub.COMPANY.com\nELB: 
a7ba438d7068b4a799d29d3d408b0932-1e39235cdff6d511.elb.ap-southeast-1.amazonaws.com\n\nCNAME: kafka-b1-apac-nprod-gbl-mdm-hub.COMPANY.com\nELB: a72bbc64327cb4ee4b35ae5abeefbb26-4c392c106b29b6e5.elb.us-east-1.amazonaws.com\n\nCNAME: kafka-b2-apac-nprod-gbl-mdm-hub.COMPANY.com\nELB: a7fdb6117b2184096915aed31732110b-91c5ac7fb0968710.elb.us-east-1.amazonaws.com\n\nCNAME: kafka-b3-apac-nprod-gbl-mdm-hub.COMPANY.com\nELB: a99220323cc684bcaa5e29c198777e13-ddf5ddbf36fe3025.elb.us-east-1.amazonaws.comBest Regards,PiotrMDM HubFirewall whitelistingTicket queue: GBL-NETWORK ECSTitle: Firewall exceptions for new BoldMoves PDKS clusterDescription:Hi Team,\n\nPlease open all traffic listed in attached Excel sheet.\nIn case this is not the queue where I should request Firewall changes, kindly point me in the right direction.\n\nBest Regards,\nPiotr\nMDM HubAttached excel:SourceSource IPDestinationDestination IPPortMDM Hub monitoring (euw1z1pl046.COMPANY.com)CI/CD server (sonar-gbicomcloud.COMPANY.com)10.90.98.0/24pdcs-apa1p.COMPANY.com-443MDM Hub monitoring (euw1z1pl046.COMPANY.com)CI/CD server (sonar-gbicomcloud.COMPANY.com)EMEA NPROD MDM Hub10.90.98.0/24APAC NPROD - PDKS cluster●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●4439094Global NPROD MDM Hub10.90.96.0/24APAC NPROD - PDKS cluster●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●443APAC NPROD - PDKS cluster●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●Global NPROD MDM Hub10.90.96.0/248443APAC NPROD - PDKS cluster●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●EMEA NPROD MDM Hub10.90.98.0/248443Integration tests:In mdm-hub-env-config prepare inventory/kube_dev_apac (copy kube_dev_amer and adjust variables)run "prepare_int_tests" playbook:\n$ ansible-playbook prepare_int_tests.yml -i inventory/kube_dev_apac/inventory -e src_dir="/mnt/c/Users/panu/gitrep/mdm-hub-inbound-services-all"\nin mdm-hub-inbound-services confirm test resources (citrus properties) for mdm-integration-tests have been replaced and run two Gradle 
tasks:\n- mdm-gateway/mdm-interation-tests/Tasks/verification/commonIntegrationTests\n- mdm-gateway/mdm-interation-tests/Tasks/verification/integrationTestsForCOMPANYModel"
},
{
"title": "Configuration (apac prod k8s)",
"pageID": "234699630",
"pageLink": "/pages/viewpage.action?pageId=234699630",
"content": "Installation of the new APAC prod cluster, based on the AMER prod configuration.\nCopy the mdm-hub-cluster-env/amer/prod directory into the mdm-hub-cluster-env/apac directory.\nChange dir names from "amer" to "apac": apac-backend, apac-prod.\nReplace everything in the files in the apac directory: "amer"→"apac".\nCertificates\nGenerate private keys and CSRs, and request the Kong certificate (kong/config_files/certs).\nanuskp@CF-341562$ openssl req -nodes -newkey rsa:2048 -sha256 -keyout api-apac-prod-gbl-mdm-hub.COMPANY.com.key -out api-apac-prod-gbl-mdm-hub.COMPANY.com.csr\nGenerating a RSA private key\n..................+++++\n.........................+++++\nwriting new private key to 'api-apac-prod-gbl-mdm-hub.COMPANY.com.key'\n-----\nYou are about to be asked to enter information that will be incorporated\ninto your certificate request.\nWhat you are about to enter is what is called a Distinguished Name or a DN.\nThere are quite a few fields but you can leave some blank\nFor some fields there will be a default value,\nIf you enter '.', the field will be left blank.\n-----\nCountry Name (2 letter code) [AU]:\nState or Province Name (full name) [Some-State]:\nLocality Name (eg, city) []:\nOrganization Name (eg, company) [Internet Widgits Pty Ltd]:COMPANY Incorporated\nOrganizational Unit Name (eg, section) []:\nCommon Name (e.g. 
server FQDN or YOUR name) []:api-apac-prod-gbl-mdm-hub.COMPANY.com\nEmail Address []:DL-ATP_MDMHUB_SUPPORT@COMPANY.com\n\nPlease enter the following 'extra' attributes\nto be sent with your certificate request\nA challenge <a href="https://confluence.COMPANY.com/plugins/servlet/pii4conf/pii?id=1588665">●●●●●●●●●●●●</a>\nAn optional company name []:\nSAN:DNS Name=api-apac-prod-gbl-mdm-hub.COMPANY.comDNS Name=www.api-apac-prod-gbl-mdm-hub.COMPANY.comDNS Name=kibana-apac-prod-gbl-mdm-hub.COMPANY.comDNS Name=prometheus-apac-prod-gbl-mdm-hub.COMPANY.comDNS Name=grafana-apac-prod-gbl-mdm-hub.COMPANY.comDNS Name=elastic-apac-prod-gbl-mdm-hub.COMPANY.comDNS Name=consul-apac-prod-gbl-mdm-hub.COMPANY.comDNS Name=akhq-apac-prod-gbl-mdm-hub.COMPANY.comDNS Name=airflow-apac-prod-gbl-mdm-hub.COMPANY.comDNS Name=mongo-apac-prod-gbl-mdm-hub.COMPANY.comDNS Name=mdm-log-management-apac-noprod.COMPANY.comDNS Name=gbl-mdm-hub-apac-prod.COMPANY.comPlace private-key and signed certificate in kong/config_files/certs. 
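Before placing the pair in kong/config_files/certs, it can help to confirm the signed certificate really matches the generated private key; a minimal sketch (the helper name is illustrative, not part of the repo):

```shell
# Hedged sketch: compare modulus digests of certificate and key; the two digests
# must be identical for the pair to work together.
# cert_matches_key is an illustrative helper name, not an existing script.
cert_matches_key() {
  c=$(openssl x509 -noout -modulus -in "$1" | openssl md5)
  k=$(openssl rsa  -noout -modulus -in "$2" | openssl md5)
  [ "$c" = "$k" ]
}
# e.g. cert_matches_key api-apac-prod-gbl-mdm-hub.COMPANY.com.crt \
#                       api-apac-prod-gbl-mdm-hub.COMPANY.com.key
```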
Git-ignore them and encrypt them into .encrypt files.Generate private-keys, CSRs and request Kafka certificate (apac-backend/secret.yaml)\nanuskp@CF-341562$ openssl req -nodes -newkey rsa:2048 -sha256 -keyout kafka-apac-prod-gbl-mdm-hub.COMPANY.com.key -out kafka-apac-prod-gbl-mdm-hub.COMPANY.com.csr\nGenerating a RSA private key\n................................................................+++++\n.......................................+++++\nwriting new private key to 'kafka-apac-prod-gbl-mdm-hub.COMPANY.com.key'\n-----\nYou are about to be asked to enter information that will be incorporated\ninto your certificate request.\nWhat you are about to enter is what is called a Distinguished Name or a DN.\nThere are quite a few fields but you can leave some blank\nFor some fields there will be a default value,\nIf you enter '.', the field will be left blank.\n-----\nCountry Name (2 letter code) [AU]:\nState or Province Name (full name) [Some-State]:\nLocality Name (eg, city) []:\nOrganization Name (eg, company) [Internet Widgits Pty Ltd]:COMPANY Incorporated\nOrganizational Unit Name (eg, section) []:\nCommon Name (e.g. 
server FQDN or YOUR name) []:kafka-apac-prod-gbl-mdm-hub.COMPANY.com\nEmail Address []:DL-ATP_MDMHUB_SUPPORT@COMPANY.com\n\nPlease enter the following 'extra' attributes\nto be sent with your certificate request\nA challenge <a href="https://confluence.COMPANY.com/plugins/servlet/pii4conf/pii?id=1588666">●●●●●●●●●●●●</a>\nAn optional company name []:\nSAN:DNS Name=kafka-apac-prod-gbl-mdm-hub.COMPANY.comDNS Name=kafka-b1-apac-prod-gbl-mdm-hub.COMPANY.comDNS Name=kafka-b2-apac-prod-gbl-mdm-hub.COMPANY.comDNS Name=kafka-b3-apac-prod-gbl-mdm-hub.COMPANY.comDNS Name=kafka-b4-apac-prod-gbl-mdm-hub.COMPANY.comDNS Name=kafka-b5-apac-prod-gbl-mdm-hub.COMPANY.comDNS Name=kafka-b6-apac-prod-gbl-mdm-hub.COMPANY.comAfter receiving the certificate, encode it with base64 and paste into apac-backend/secrets.yaml:  -> secrets.mdm-kafka-external-listener-cert.listener.key  -> secrets.mdm-kafka-external-listener-cert.listener.crt Raise a ticket via Request Manager (*) Since this is a new environment, remove everything under "migration" key in apac-backend/values.yaml.Replace all user_passwords in apac/prod/secrets.yaml. for each ●●●●●●●●●●●●●●●●● a new, 40-char one and globally replace it in all apac configs.Go through apac-dev/config_files one by one and adjust settings such as: Reltio, SQS etc.(*) Change Kafka topics and consumergroup names to fit naming standards. 
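For the base64 step above, a minimal sketch of producing single-line values for apac-backend/secrets.yaml (the helper name and the stand-in file are illustrative; the real inputs are the signed listener key and certificate):

```shell
# Hedged sketch: emit a file as one base64 line, ready to paste under
# secrets.mdm-kafka-external-listener-cert.listener.{key,crt}.
# b64one and the stand-in file below are illustrative names, not part of the repo.
b64one() { base64 "$1" | tr -d '\n'; }

printf 'stand-in-cert' > /tmp/listener.crt   # real input: the signed .key / .crt
b64one /tmp/listener.crt                     # -> c3RhbmQtaW4tY2VydA==
```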
This is a one-time activity and does not need to be repeated if the next environments are built based on the APAC config.\nExport the amer-prod CRDs into a yaml file and import them in apac-prod:\n$ kubectx atp-mdmhub-prod-amer\n$ kubectl get crd -A -o yaml > ~/crd-definitions-amer.yaml\n$ kubectx atp-mdmhub-prod-apac\n$ kubectl apply -f ~/crd-definitions-amer.yaml\nCreate config dirs for git2consul (mdm-hub-env-config):\n$ git checkout config/dev_amer\n$ git pull\n$ git branch config/dev_apac\n$ git checkout config/dev_apac\n$ git push origin config/dev_apac\nRepeat for qa and stage.\nInstall operators:\n$ ./install.sh -l operators -r apac -c prod -e apac-dev -v 3.9.4\nInstall backend:\n$ ./install.sh -l backend -r apac -c prod -e apac-dev -v 3.9.4\n1. Log into MongoDB (use port forward if there is no connection to kong: run "kubectl port-forward mongo-0 -n apac-backend 27017" and connect to mongo on localhost:27017), or retrieve the ip address from the ELB of the kong service, add it to the Windows hosts file as a DNS name (example: ●●●●●●●●●●●● mongo-amer-prod-gbl-mdm-hub.COMPANY.com) and connect to mongo on mongo-amer-prod-gbl-mdm-hub.COMPANY.com:27017.\n2. Run the below script:\ndb.createCollection("entityHistory") \ndb.entityHistory.createIndex({country: -1}, {background: true, name: "idx_country"});\ndb.entityHistory.createIndex({sources: -1}, {background: true, name: "idx_sources"});\ndb.entityHistory.createIndex({entityType: -1}, {background: true, name: "idx_entityType"});\ndb.entityHistory.createIndex({status: -1}, {background: true, name: "idx_status"});\ndb.entityHistory.createIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});\ndb.entityHistory.createIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});\ndb.entityHistory.createIndex({"entity.crosswalks.value": 1}, {background: true, name: "idx_crosswalks_v_asc"});\ndb.entityHistory.createIndex({"entity.crosswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"});\ndb.entityHistory.createIndex({forceModificationDate: -1}, {background: true, name: "idx_forceModificationDate"});\ndb.entityHistory.createIndex({mdmSource: -1}, {background: true, name: "idx_mdmSource"});\ndb.entityHistory.createIndex({entityChecksum: -1}, {background: true, name: "idx_entityChecksum"});\ndb.entityHistory.createIndex({parentEntityId: -1}, {background: true, name: "idx_parentEntityId"});\n\ndb.entityHistory.createIndex({COMPANYGlobalCustomerID: -1}, {background: true, name: "idx_COMPANYGlobalCustomerID"});\n\ndb.createCollection("entityRelations")\ndb.entityRelations.createIndex({country: -1}, {background: true, name: "idx_country"});\ndb.entityRelations.createIndex({sources: -1}, {background: true, name: "idx_sources"});\ndb.entityRelations.createIndex({relationType: -1}, {background: true, name: "idx_relationType"});\ndb.entityRelations.createIndex({status: -1}, {background: true, name: "idx_status"});\ndb.entityRelations.createIndex({creationDate: -1}, {background: true, name: 
"idx_creationDate"});\ndb.entityRelations.createIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});\ndb.entityRelations.createIndex({startObjectId: -1}, {background: true, name: "idx_startObjectId"});\ndb.entityRelations.createIndex({endObjectId: -1}, {background: true, name: "idx_endObjectId"});\ndb.entityRelations.createIndex({"relation.crosswalks.value": 1}, {background: true, name: "idx_crosswalks_v_asc"}); \ndb.entityRelations.createIndex({"relation.crosswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"}); \ndb.entityRelations.createIndex({forceModificationDate: -1}, {background: true, name: "idx_forceModificationDate"}); \ndb.entityRelations.createIndex({mdmSource: -1}, {background: true, name: "idx_mdmSource"});\n \ndb.createCollection("LookupValues")\ndb.LookupValues.createIndex({updatedOn: 1}, {background: true, name: "idx_updatedOn"});\ndb.LookupValues.createIndex({countries: 1}, {background: true, name: "idx_countries"});\ndb.LookupValues.createIndex({mdmSource: 1}, {background: true, name: "idx_mdmSource"});\ndb.LookupValues.createIndex({type: 1}, {background: true, name: "idx_type"});\ndb.LookupValues.createIndex({code: 1}, {background: true, name: "idx_code"});\ndb.LookupValues.createIndex({valueUpdateDate: 1}, {background: true, name: "idx_valueUpdateDate"});\n\ndb.createCollection("ErrorLogs")\ndb.ErrorLogs.createIndex({plannedResubmissionDate: -1}, {background: true, name: "idx_plannedResubmissionDate_-1"});\ndb.ErrorLogs.createIndex({timestamp: -1}, {background: true, name: "idx_timestamp_-1"});\ndb.ErrorLogs.createIndex({exceptionClass: 1}, {background: true, name: "idx_exceptionClass_1"});\ndb.ErrorLogs.createIndex({status: -1}, {background: true, name: "idx_status_-1"});\n\ndb.createCollection("batchEntityProcessStatus")\ndb.batchEntityProcessStatus.createIndex({batchName: -1, sourceId: -1}, {background: true, name: 
"idx_findByBatchNameAndSourceId"});\ndb.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, objectType: -1, sourceIngestionDate: -1}, {background: true, name: "idx_EntitiesUnseen_SoftDeleteJob"});\ndb.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, sendDateMDM: -1, updateDateMDM: -1}, {background: true, name: "idx_ProcessingResult_ProcessingJob"});\ndb.batchEntityProcessStatus.createIndex({batchName: -1, sendDateMDM: -1, updateDateMDM: -1}, {background: true, name: "idx_ProcessingResultAll_ProcessingJob"});\n\ndb.createCollection("batchInstance")\n\ndb.createCollection("relationCache")\ndb.relationCache.createIndex({startSourceId: -1}, {background: true, name: "idx_findByStartSourceId"});\n\ndb.createCollection("DCRRequests")\ndb.DCRRequests.createIndex({type: -1, "status.name": -1}, {background: true, name: "idx_typeStatusNameFind_TraceVR"});\ndb.DCRRequests.createIndex({entityURI: -1, "status.name": -1}, {background: true, name: "idx_entityURIStatusNameFind_SubmitVR"});\ndb.DCRRequests.createIndex({changeRequestURI: -1, "status.name": -1}, {background: true, name: "idx_changeRequestURIStatusNameFind_DSResponse"});\n\ndb.createCollection("entityMatchesHistory")\ndb.entityMatchesHistory.createIndex({_id: -1, "matches.matchObjectUri": -1, "matches.matchType": -1}, {background: true, name: "idx_findAutoLinkMatch_CleanerStream"});\n\ndb.createCollection("DCRRegistry")\ndb.DCRRegistry.createIndex({"status.changeDate": -1}, {background: true, name: "idx_changeDate_FindDCRsBy"});\ndb.DCRRegistry.createIndex({extDCRRequestId: -1}, {background: true, name: "idx_extDCRRequestId_FindByExtId"});\ndb.DCRRegistry.createIndex({changeRequestURI: -1, "status.name": -1}, {background: true, name: "idx_changeRequestURIStatusNameFind_DSResponse"});\n\ndb.DCRRegistryONEKEY.createIndex({type: -1, "status.name": -1}, {background: true, name: "idx_typeStatusNameFind_TraceVR"});\n\ndb.createCollection("sequenceCounters")\ndb.sequenceCounters.insertOne({_id: 
"COMPANYAddressIDSeq", sequence: NumberLong(7000000000)}) // NOTE: 7000000000 is APAC-specific\nRegionSeq start numberamer6000000000apac7000000000emea5000000000Log into Kibana. Export dashboards/indices from AMER and import them in APAC.Use the following playbook:- change values in  ansible repository:inventory/jenkins/group_vars/all/all.yml → #CHNG- run playbook:  ansible-playbook install_kibana_objects.yml -i inventory/jenkins/inventory --vault-password-file=../vault -vInstall mdmhub:\n$ ./install.sh -l mdmhub -r apac -c prod -e apac-dev -v 3.9.4\nTickets:DNS names ticket:Ticket queue: GBL-NETWORK DDITitle: Add domains to DNSDescription:Hi Team,Please add below domains:api-apac-prod-gbl-mdm-hub.COMPANY.comkibana-apac-prod-gbl-mdm-hub.COMPANY.comprometheus-apac-prod-gbl-mdm-hub.COMPANY.comgrafana-apac-prod-gbl-mdm-hub.COMPANY.comelastic-apac-prod-gbl-mdm-hub.COMPANY.comconsul-apac-prod-gbl-mdm-hub.COMPANY.comakhq-apac-prod-gbl-mdm-hub.COMPANY.comairflow-apac-prod-gbl-mdm-hub.COMPANY.commongo-apac-prod-gbl-mdm-hub.COMPANY.commdm-log-management-apac-noprod.COMPANY.comgbl-mdm-hub-apac-prod.COMPANY.comas CNAMEs of our ELB:a2349e1a042d14c0691f14cf0db75910-14dc3724296a3d4e.elb.ap-southeast-1.amazonaws.comAlso, please add one CNAME for each one of below ELBs:CNAME: kafka-apac-prod-gbl-mdm-hub.COMPANY.comELB: a40444d2dc7b243b08b40e702105979e-28d24a897d699626.elb.ap-southeast-1.amazonaws.comCNAME: kafka-b1-apac-prod-gbl-mdm-hub.COMPANY.comELB: adadc7f02bf9a4ac585f4fba6870d0ae-be80c1c734ef18a3.elb.ap-southeast-1.amazonaws.comCNAME: kafka-b2-apac-prod-gbl-mdm-hub.COMPANY.comELB: a6c81c4fcba6c42f884c2511b5c5183d-d80b70b1ac791ce9.elb.ap-southeast-1.amazonaws.comCNAME: kafka-b3-apac-prod-gbl-mdm-hub.COMPANY.comELB: a8b88854568314cb5b01a9073e1f1515-0b589be04ea6a31b.elb.ap-southeast-1.amazonaws.comBest Regards,Kacper UrbanskiMDMHUBGBL-NETWORK DDIFirewall whitelistingTicket queue: GBL-NETWORK ECSTitle: Firewall exceptions for new BoldMoves PDKS clusterDescription:Hi 
Team,\n\nPlease open all traffic listed in attached Excel sheet.\nIn case this is not the queue where I should request Firewall changes, kindly point me in the right direction.\n\nBest Regards,\nPiotr\nMDM HubAttached excel:SourceSource IPDestinationDestination IPPortMDM Hub monitoring (euw1z1pl046.COMPANY.com)CI/CD server (sonar-gbicomcloud.COMPANY.com)10.90.98.0/24pdcs-apa1p.COMPANY.com-443MDM Hub monitoring (euw1z1pl046.COMPANY.com)CI/CD server (sonar-gbicomcloud.COMPANY.com)EMEA prod MDM Hub10.90.98.0/24APAC prod - PDKS cluster●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●4439094Global prod MDM Hub10.90.96.0/24APAC prod - PDKS cluster●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●443APAC prod - PDKS cluster●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●Global prod MDM Hub10.90.96.0/248443APAC prod - PDKS cluster●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●EMEA prod MDM Hub10.90.98.0/248443Integration tests:In mdm-hub-env-config prepare inventory/kube_dev_apac (copy kube_dev_amer and adjust variables)run "prepare_int_tests" playbook:\n$ ansible-playbook prepare_int_tests.yml -i inventory/kube_dev_apac/inventory -e src_dir="/mnt/c/Users/panu/gitrep/mdm-hub-inbound-services-all"\nin mdm-hub-inbound-services confirm test resources (citrus properties) for mdm-integration-tests have been replaced and run two Gradle tasks:-mdm-gateway/mdm-interation-tests/Tasks/verification/commonIntegrationTests-mdm-gateway/mdm-interation-tests/Tasks/verification/integrationTestsForCOMPANYModel"
},
{
"title": "Configuration (emea)",
"pageID": "218444982",
"pageLink": "/pages/viewpage.action?pageId=218444982",
"content": "Set up Mongo indexes and collections:\nEntityHistory:\ndb.entityHistory.createIndex({COMPANYGlobalCustomerID: -1}, {background: true, name: "idx_COMPANYGlobalCustomerID"});\nDCR Service 2 Indexes:\ndb.DCRRegistryONEKEY.createIndex({type: -1, "status.name": -1}, {background: true, name: "idx_typeStatusNameFind_TraceVR"});\n\ndb.DCRRegistry.createIndex({"status.changeDate": -1}, {background: true, name: "idx_changeDate_FindDCRsBy"});\ndb.DCRRegistry.createIndex({extDCRRequestId: -1}, {background: true, name: "idx_extDCRRequestId_FindByExtId"});\ndb.DCRRegistry.createIndex({changeRequestURI: -1, "status.name": -1}, {background: true, name: "idx_changeRequestURIStatusNameFind_DSResponse"});\n"
},
{
"title": "Configuration (gblus prod)",
"pageID": "164470081",
"pageLink": "/pages/viewpage.action?pageId=164470081",
"content": "Config file: gblmdm-hub-us-spec_v05.xlsxAWS ResourcesResource NameResource TypeSpecificationAWS RegionAWS Availability ZoneDependen onDescriptionComponentsHUBGWInterfaceGBL MDM US HUB Prod Data Svr1 - amraelp00007844EC2r5.2xlargeus-east-1bEBS APP DATA MDM PROD SVR1EBS DOCKER DATA MDM PROD SVR1- Mongo - data redundancy and high availability   primary, secondary, tertiary needs to be hosted on a separated server and zones - high availability if one zone is offline- Disks:     Mount 50G - /var/lib/docker/ - docker installation directory    Mount 750GB - /app/ - docker applications local storage OS: Red Hat Enterprise Linux Server release 7.4mongoEFK-DATAGBL MDM US HUB Prod Data Svr2 - amraelp00007870EC2r5.2xlargeus-east-1eEBS APP DATA MDM PROD SVR2EBS DOCKER DATA MDM PROD SVR2- Mongo - data redundancy and high availability   primary, secondary, tertiary needs to be hosted on a separated server and zones - high availability if one zone is offline- Disks:     Mount 50G - /var/lib/docker/ - docker installation directory    Mount 750GB - /app/ - docker applications local storage OS: Red Hat Enterprise Linux Server release 7.4mongoEFK-DATAGBL MDM US HUB Prod Data Svr3 - amraelp00007847EC2r5.2xlargeus-east-1bEBS APP DATA MDM PROD SVR3EBS DOCKER DATA MDM PROD SVR3- Mongo - data redundancy and high availability   primary, secondary, tertiary needs to be hosted on a separated server and zones - high availability if one zone is offline- Disks:     Mount 50G - /var/lib/docker/ - docker installation directory    Mount 750GB - /app/ - docker applications local storage OS: Red Hat Enterprise Linux Server release 7.4mongoEFK-DATAGBL MDM US HUB Prod Svc Svr1 - amraelp00007848EC2r5.2xlargeus-east-1bEBS APP SVC MDM PROD SVR1EBS DOCKER SVC MDM PROD SVR1- Kafka and zookeeper - Kong and Cassandra    Cassandra replication factory set to 3 Kong proxy high availability     Load balancer for Kong API- Disks:     Mount 50G - /var/lib/docker/ - docker installation directory    Mount 
450GB - /app/ - docker applications local storage OS: Red Hat Enterprise Linux Server release 7.4KafkaZookeeperKongCassandraHUBGWinboundoutboundGBL MDM US HUB Prod Svc Svr2 - amraelp00007849EC2r5.2xlargeus-east-1bEBS APP SVC MDM PROD SVR2EBS DOCKER SVC MDM PROD SVR2- Kafka and zookeeper - Kong and Cassandra    Cassandra replication factory set to 3 Kong proxy high availability     Load balancer for Kong API- Disks:     Mount 50G - /var/lib/docker/ - docker installation directory    Mount 450GB - /app/ - docker applications local storage OS: Red Hat Enterprise Linux Server release 7.4KafkaZookeeperKongCassandraHUBGWinboundoutboundGBL MDM US HUB Prod Svc Svr3 - amraelp00007871EC2r5.2xlargeus-east-1eEBS APP SVC MDM PROD SVR3EBS DOCKER SVC MDM PROD SVR3- Kafka and zookeeper - Kong and Cassandra    Cassandra replication factory set to 3 Kong proxy high availability     Load balancer for Kong API- Disks:     Mount 50G - /var/lib/docker/ - docker installation directory    Mount 450GB - /app/ - docker applications local storage OS: Red Hat Enterprise Linux Server release 7.4KafkaZookeeperKongCassandraHUBGWinboundoutboundEBS APP DATA MDM Prod Svr1EBS750 GB XFSus-east-1bmount to /app on GBL MDM US HUB Prod Data Svr1 - amraelp00007844EBS APP DATA MDM Prod Svr2EBS750 GB XFSus-east-1emount to /app on GBL MDM US HUB Prod Data Svr2 - amraelp00007870EBS APP DATA MDM Prod Svr3EBS750 GB XFSus-east-1bmount to /app on GBL MDM US HUB Prod Data Svr3 - amraelp00007847EBS DOCKER DATA MDM Prod Svr1EBS50 GB XFSus-east-1bmount to docker devicemapper on GBL MDM US HUB Prod Data Svr1 - amraelp00007844EBS DOCKER DATA MDM Prod Svr2EBS50 GB XFSus-east-1emount to docker devicemapper on GBL MDM US HUB Prod Data Svr2 - amraelp00007870EBS DOCKER DATA MDM Prod Svr3EBS50 GB XFSus-east-1bmount to docker devicemapper on GBL MDM US HUB Prod Data Svr3 - amraelp00007847EBS APP SVC MDM Prod Svr1EBS450 GB XFSus-east-1bmount to /app on GBL MDM US HUB Prod Svc Svr1 - amraelp00007848EBS APP SVC MDM Prod 
Svr2EBS450 GB XFSus-east-1bmount to /app on GBL MDM US HUB Prod Svc Svr2 - amraelp00007849EBS APP SVC MDM Prod Svr3EBS450 GB XFSus-east-1emount to /app on GBL MDM US HUB Prod Svc Svr3 - amraelp00007871EBS DOCKER SVC MDM Prod Svr1EBS50 GB XFSus-east-1bmount to docker devicemapper on GBL MDM US HUB Prod Svc Svr1 - amraelp00007848EBS DOCKER SVC MDM Prod Svr2EBS50 GB XFSus-east-1bmount to docker devicemapper on GBL MDM US HUB Prod Svc Svr2 - amraelp00007849EBS DOCKER SVC MDM Prod Svr3EBS50 GB XFSus-east-1emount to docker devicemapper on GBL MDM US HUB Prod Svc Svr3 - amraelp00007871GBLMDMHUB US S3 Bucketgblmdmhubprodamrasp101478S3us-east-1Load BalancerELBELBGBL MDM US HUB Prod Svc Svr1GBL MDM US HUB Prod Svc Svr2GBL MDM US HUB Prod Svc Svr3MAP 443 - 8443 (only HTTPS) - ssl offloading on KONGDomain: gbl-mdm-hub-us-prod.COMPANY.comNAME:  PFE-CLB-ATP-MDMHUB-US-PROD-001DNS Name : internal-PFE-CLB-ATP-MDMHUB-US-PROD-001-146249044.us-east-1.elb.amazonaws.comSSL cert for domain gbl-mdm-hub-us-prod.COMPANY.comCertificateDomain : domain gbl-mdm-hub-us-prod.COMPANY.comDNS RecordDNSAddress: gbl-mdm-hub-us-prod.COMPANY.com -> Load BalancerRolesNameTypePrivilegesMember ofDescriptionRequests IDProvided accessUNIX-universal-awscbsdev-mdmhub-us-prod-computers-UUnix Computer ROLEAccess to hosts:GBL MDM US HUB Prod Data Svr1GBL MDM US HUB Prod Data Svr2GBL MDM US HUB Prod Data Svr3GBL MDM US HUB Prod Svc Svr1GBL MDM US HUB Prod Svc Svr2GBL MDM US HUB Prod Svc Svr3Computer role including all MDM servers-UNIX-GBLMDMHUB-US-PROD-ADMINUser Role- dzdo root - access to docker- access to docker-engine (systemctl) restart, stop, start docker engineUNIX-GBLMDMHUB-US-PROD-U  Admin role to manage all resources on servers-KUCR - 20200519090759337WARECP - 20200519083956229GENDEL - 20200519094636480MORAWM03 - 20200519084328245PIASEM - 20200519095309490UNIX-GBLMDMHUB-US-PROD-HUBROLEUser Role- Read only for logs- dzdo docker ps * - list docker container- dzdo docker logs * - check docker container 
logs- Read access to /app/* - check  docker container logsUNIX-GBLMDMHUB-US-PROD-U  role without root access, read only for logs and check docker status. It will be used by monitoring-UNIX-GBLMDMHUB-US-PROD-SEROLEUser Role- dzdo docker * UNIX-GBLMDMHUB-US-PROD-U  service role - it will be used to run microservices  from Jenkins CD pipeline-Service Account - GBL32452299imdmuspr mdmhubuspr - 20200519095543524UNIX-GBLMDMHUB-US-PROD-UUser Role- Read only for logs- Read access to /app/* - check  docker container logsUNIX-GBLMDMHUB-US-PROD-U  -Ports - Security Group PFE-SG-GBLMDMHUB-US-APP-PROD-001 Port ApplicationWhitelisted8443Kong (API proxy)ALL from COMPANY VPN7000Cassandra (Kong DB)  - inter-node communicationALL from COMPANY VPN7001Cassandra (Kong DB) - inter-node communicationALL from COMPANY VPN9042Cassandra (Kong DB)  - client portALL from COMPANY VPN9094Kafka - SASL_SSL protocolALL from COMPANY VPN9093Kafka - SSL protocolALL from COMPANY VPN9092KAFKA  - Inter-broker communication   ALL from COMPANY VPN2181ZookeeperALL from COMPANY VPN2888Zookeeper - intercommunicationALL from COMPANY VPN3888Zookeeper - intercommunicationALL from COMPANY VPN27017MongoALL from COMPANY VPN9999HawtIO - administration consoleALL from COMPANY VPN9200ElasticsearchALL from COMPANY VPN9300Elasticsearch TCP - cluster communication portALL from COMPANY VPN5601KibanaALL from COMPANY VPN9100 - 9125Prometheus exportersALL from COMPANY VPN9542Kong exporterALL from COMPANY VPN2376Docker encrypted communication with the daemonALL from COMPANY VPNDocumentationService Account ( Jenkins / server access )http://btondemand.COMPANY.com/solution/160303162657677NSA - UNIX - user access to Servers:http://btondemand.COMPANY.com/solution/131014104610578InstructionsHow to add user access to UNIX-GBLMDMHUB-US-PROD-ADMINlog in to http://btondemand.COMPANY.com/search NSA - UNIXuser access to Servers - http://btondemand.COMPANY.com/solution/131014104610578go to Request Manager -> Request Catalog Search 
NSAChoose NSA-UNIX NSA Requests for Unix.ContinueFill Formula Add user access details formulaAccount Type-NSA-UNIXName-Morawski, MikolajAD Username-MORAWM03User Domain-EMEARequestID-20200310100151888Request Details BelowRoleName: YesDescription:requestorCommentsList:Hi Team,I created the request to add account (EMEA/MORAWM03) to the ADMIN role on the following servers:amraelp00007844amraelp00007870amraelp00007847amraelp00007848amraelp00007849amraelp00007871Role name: UNIX-GBLMDMHUB-US-PROD-ADMIN-U -> member of: UNIX-universal-awscbsdev-mdmhub-us-prod-computers-U (UNIX-GBLMDMHUB-US-PROD-U)Could you please verify if I provided all required information?Regards,MikolajaccessToSpecificServerList_roleLst_2: NobusinessJustificationList:MDM HUB Team access toGBL MDM US HUB Prod Data Svr1 - amraelp00007844GBL MDM US HUB Prod Data Svr2 - amraelp00007870GBL MDM US HUB Prod Data Svr3 - amraelp00007847GBL MDM US HUB Prod Svc Svr1 - amraelp00007848GBL MDM US HUB Prod Svc Svr2 - amraelp00007849GBL MDM US HUB Prod Svc Svr3 - amraelp00007871regarding Fletcher projectserverLocationList: Not ApplicablenisDomainOtherList: OtherroleGroupAccount_roleLst_6: Add to Role Group(s)roleGroupNameList: UNIX-GBLMDMHUB-US-PROD-ADMIN-UaccountPrivilegeList_roleLst_7: Add PrivilegesaccountList_roleLst_8: UNIX group membershipunixGroupNameList: UNIX-GBLMDMHUB-US-PROD-ADMIN-USubmit requestHow to add/create new Service Account with access to UNIX-GBLMDMHUB-US-PROD-SEROLEService Account NameUNIX group namedetailsBTOnDemandLessons Learned mdmusprmdmhubusprService Account Name has to contain max 8 charactersGBL32452299iRE Requires Additional Information (GBL32099918i).msglog in to http://btondemand.COMPANY.com/search NSA - UNIXuser access to Servers - http://btondemand.COMPANY.com/solution/131014104610578go to Request Manager -> Request Catalog Search NSAChoose NSA-UNIX NSA Requests for Unix.ContinueFill FormulaNo -> 
LegacyYesExistingLegacyamraelp00007844amraelp00007870amraelp00007847amraelp00007848amraelp00007849amraelp00007871N/AOtherTo manage the Service account and Software for the MDM HUBIt will be used to run microservices from Jenkins CD pipelinePrimary: VARGAA08Secondary: TIRUMS05Service AccountService Account Name: UNIX group namePROD:mdmuspr mdmhubuspr - Service Account Name has to contain max 8 charactersMDM HUB Service Account access (related to Docker microservices and Jenkins CD) forGBL MDM US HUB Prod Data Svr1 - amraelp00007844GBL MDM US HUB Prod Data Svr2 - amraelp00007870GBL MDM US HUB Prod Data Svr3 - amraelp00007847GBL MDM US HUB Prod Svc Svr1 - amraelp00007848GBL MDM US HUB Prod Svc Svr2 - amraelp00007849GBL MDM US HUB Prod Svc Svr3 - amraelp00007871regarding Fletcher projectHi Team,I am trying to create the request to create the Service Account for the following servers. amraelp00007844amraelp00007870amraelp00007847amraelp00007848amraelp00007849amraelp00007871I want to provide the privileges for this Service Account:Role name: UNIX-GBLMDMHUB-US-PROD-SEROLE-U -> member of: UNIX-GBLMDMHUB-US-PROD-U  -> UNIX-universal-awscbsdev-mdmhub-us-prod-computers-U- docker * - folder access read/writeComputer role related: UNIX-universal-awscbsdev-mdmhub-us-prod-computers-UCould you please verify if I provided all the required information and this Request is correct?Regards,MikolajHome DIR: /app/mdmusprHow to open ports / create new Security Group - PFE-SG-GBLMDMHUB-US-APP-PROD-001http://btondemand.COMPANY.com/solution/120906165824277To create a new security group:Create server Security Group and Open Ports on  SC queue Name: GBL-BTI-IOD AWS FULL SUPPORTlog in to http://btondemand.COMPANY.com/ go to Get Support Search for queue: GBL-BTI-IOD AWS FULL SUPPORTSubmit Request to this queue:RequestHi Team,Could you please create a new security group and assign it to these servers.GBL MDM US HUB Prod Data Svr1 - amraelp00007844.COMPANY.comGBL MDM US HUB Prod Data Svr2 - 
amraelp00007870.COMPANY.comGBL MDM US HUB Prod Data Svr3 - amraelp00007847.COMPANY.comGBL MDM US HUB Prod Svc Svr1 - amraelp00007848.COMPANY.comGBL MDM US HUB Prod Svc Svr2 - amraelp00007849.COMPANY.comGBL MDM US HUB Prod Svc Svr3 - amraelp00007871.COMPANY.comPlease add the following owners:Primary: VARGAA08Secondary: TIRUMS05(please let me know if approval is required)New Security group Requested: PFE-SG-GBLMDMHUB-US-APP-PROD-001Please Open the following ports:Port Application Whitelisted8443 Kong (API proxy) ALL from COMPANY VPN7000 Cassandra (Kong DB) - inter-node communication ALL from COMPANY VPN7001 Cassandra (Kong DB) - inter-node communication ALL from COMPANY VPN9042 Cassandra (Kong DB) - client port ALL from COMPANY VPN9094 Kafka - SASL_SSL protocol ALL from COMPANY VPN9093 Kafka - SSL protocol ALL from COMPANY VPN9092 KAFKA - Inter-broker communication ALL from COMPANY VPN2181 Zookeeper ALL from COMPANY VPN2888 Zookeeper - intercommunication ALL from COMPANY VPN3888 Zookeeper - intercommunication ALL from COMPANY VPN27017 Mongo ALL from COMPANY VPN9999 HawtIO - administration console ALL from COMPANY VPN9200 Elasticsearch ALL from COMPANY VPN9300 Elasticsearch TCP - cluster communication port ALL from COMPANY VPN5601 Kibana ALL from COMPANY VPN9100 - 9125 Prometheus exporters ALL from COMPANY VPN9542 Kong exporter ALL from COMPANY VPN2376 Docker encrypted communication with the daemon ALL from COMPANY VPNApply this group to the following servers:amraelp00007844amraelp00007870amraelp00007847amraelp00007848amraelp00007849amraelp00007871Regards,MikolajThis will create a new Security Grouphttp://btondemand.COMPANY.com/My-Tickets/Ticket-Details?ticket=GBL32141041iThen these security groups have to be assigned to servers through the IOD portal by the Servers Owner.To open new ports:log in to http://btondemand.COMPANY.com/ go to Get Support Search for queue: GBL-BTI-IOD AWS FULL SUPPORTSubmit Request to this queue:RequestHi,Could you please modify the below 
security group and open the following port.PROD security group:Security group: PFE-SG-GBLMDMHUB-US-APP-PROD-001Port: 2376(this port is related to Docker for encrypted communication with the daemon)The hosts related to this:amraelp00007844amraelp00007870amraelp00007847amraelp00007848amraelp00007849amraelp00007871Regards,MikolajCertificates ConfigurationKafka GO TO:How to Generate JKS Keystore and Truststorekeytool -genkeypair -alias kafka.gbl-mdm-hub-us-prod.COMPANY.com -keyalg RSA -keysize 2048 -keystore server.keystore.jks -dname "CN=kafka.gbl-mdm-hub-us-prod.COMPANY.com, O=COMPANY, L=mdm_gbl_us_hub, C=US"keytool -certreq -alias kafka.gbl-mdm-hub-us-prod.COMPANY.com -file kafka.gbl-mdm-hub-us-prod.COMPANY.com.csr -keystore server.keystore.jksSAN:gbl-mdm-hub-us-prod.COMPANY.comamraelp00007848.COMPANY.com●●●●●●●●●●●●●●amraelp00007849.COMPANY.com●●●●●●●●●●●●●amraelp00007871.COMPANY.com●●●●●●●●●●●●●●Create guest_user for KAFKA - "CN=kafka.guest_user.gbl-mdm-hub-us-prod.COMPANY.com, O=COMPANY, L=GBLMDMHUB-US-PROD-KAFKA, C=US":GO TO: How to Generate JKS Keystore and Truststorekeytool -genkeypair -alias guest_user -keyalg RSA -keysize 2048 -keystore guest_user.keystore.jks -dname "CN=kafka.guest_user.gbl-mdm-hub-us-prod.COMPANY.com, O=COMPANY, L=GBLMDMHUB-US-PROD-KAFKA, C=US"keytool -certreq -alias guest_user -file kafka.guest_user.gbl-mdm-hub-us-prod.COMPANY.com.csr -keystore guest_user.keystore.jksKongopenssl req -nodes -newkey rsa:2048 -sha256 -keyout gbl-mdm-hub-us-prod.key -out gbl-mdm-hub-us-prod.csrSubject Alternative Namesgbl-mdm-hub-us-prod.COMPANY.comamraelp00007848.COMPANY.com●●●●●●●●●●●●●●amraelp00007849.COMPANY.com●●●●●●●●●●●●●amraelp00007871.COMPANY.com●●●●●●●●●●●●●●EFKPROD_GBL_USopenssl req -nodes -newkey rsa:2048 -sha256 -keyout mdm-log-management-gbl-us-prod.key -out mdm-log-management-gbl-us-prod.csr mdm-log-management-gbl-us-prod.COMPANY.comSubject Alternative Names 
mdm-log-management-gbl-us-prod.COMPANY.comgbl-mdm-hub-us-prod.COMPANY.comamraelp00007844.COMPANY.com●●●●●●●●●●●●●●amraelp00007870.COMPANY.com●●●●●●●●●●●●●●amraelp00007847.COMPANY.com●●●●●●●●●●●●●esnode1openssl req -nodes -newkey rsa:2048 -sha256 -keyout mdm-esnode1-gbl-us-prod.key -out mdm-esnode1-gbl-us-prod.csr mdm-esnode1-gbl-us-prod.COMPANY.com - Elasticsearch esnode1Subject Alternative Names mdm-esnode1-gbl-us-prod.COMPANY.comgbl-mdm-hub-us-prod.COMPANY.comamraelp00007844.COMPANY.com●●●●●●●●●●●●●●esnode2openssl req -nodes -newkey rsa:2048 -sha256 -keyout mdm-esnode2-gbl-us-prod.key -out mdm-esnode2-gbl-us-prod.csr mdm-esnode2-gbl-us-prod.COMPANY.com - Elasticsearch esnode2Subject Alternative Names mdm-esnode2-gbl-us-prod.COMPANY.comgbl-mdm-hub-us-prod.COMPANY.comamraelp00007870.COMPANY.com●●●●●●●●●●●●●●esnode3openssl req -nodes -newkey rsa:2048 -sha256 -keyout mdm-esnode3-gbl-us-prod.key -out mdm-esnode3-gbl-us-prod.csr mdm-esnode3-gbl-us-prod.COMPANY.com - Elasticsearch esnode3Subject Alternative Names mdm-esnode3-gbl-us-prod.COMPANY.comgbl-mdm-hub-us-prod.COMPANY.comamraelp00007847.COMPANY.com●●●●●●●●●●●●●Domain Configuration:Example request: GBL30514754i "Register domains "mdm-log-management*"log in to http://btondemand.COMPANY.com/getsupportWhat can we help you with? 
- Search for "Network Team Ticket"Select the most relevant topic - "DNS Request"Submit a ticket to this queue.Ticket Details: - GBL32508266iRequestHi,Could you please register the following domains:ADD the below DNS entry:========================mdm-log-management-gbl-us-prod.COMPANY.com              Alias Record to                             amraelp00007847.COMPANY.com[●●●●●●●●●●●●●]Kind regards,MikolajRequest DNSHi,Could you please register the following domains:ADD the below DNS entry for the ELB: PFE-CLB-ATP-MDMHUB-US-PROD-001:========================gbl-mdm-hub-us-prod.COMPANY.com              Alias Record to                             DNS Name : internal-PFE-CLB-ATP-MDMHUB-US-PROD-001-146249044.us-east-1.elb.amazonaws.comReferenced ELB creation ticket: GBL32561307iKind regards,MikolajEnvironment InstallationDISC:server1 amraelp00007844    APP DISC: nvme1n1   DOCKER DISC: nvme2n1server2 amraelp00007870   APP DISC: nvme2n1   DOCKER DISC: nvme1n1server3 amraelp00007847   APP DISC: nvme2n1   DOCKER DISC: nvme1n1server4 amraelp00007848   APP1 DISC: nvme2n1   APP2 DISC: nvme3n1   DOCKER DISC: nvme1n1server5 amraelp00007849   APP1 DISC: nvme2n1   APP2 DISC: nvme3n1    DOCKER DISC: nvme1n1server6 amraelp00007871   APP1 DISC: nvme2n1   APP2 DISC: nvme3n1    DOCKER DISC: nvme1n1Pre:umount /var/lib/dockerlvremove /dev/datavg/varlibdockervgreduce datavg /dev/nvme1n1vi /etc/fstabRM - /dev/mapper/datavg-varlibdocker /var/lib/docker ext4 defaults 1 2rmdir /var/lib/ -> dockermkdir /app/dockerln -s /app/docker /var/lib/dockerStart docker service after prepare_env_airflow_certs playbook run is completedClear content of /etc/sysconfig/docker-storage to DOCKER_STORAGE_OPTIONS="" to use daemon.json fileAnsible:ansible-playbook prepare_env_gbl_us.yml -i inventory/prod_gblus/inventory --limit server1 --vault-password-file=~/vault-password-fileansible-playbook prepare_env_airflow_certs.yml -i inventory/prod_gblus/inventory --limit server1 
--vault-password-file=~/vault-password-fileCN_NAME=amraelp00007844.COMPANY.comSUBJECT_ALT_NAME= IP - ●●●●●●●●●●●●●●ansible-playbook prepare_env_gbl_us.yml -i inventory/prod_gblus/inventory --limit server2 --vault-password-file=~/vault-password-fileansible-playbook prepare_env_airflow_certs.yml -i inventory/prod_gblus/inventory --limit server2 --vault-password-file=~/vault-password-fileCN_NAME=amraelp00007870.COMPANY.comSUBJECT_ALT_NAME= IP - ●●●●●●●●●●●●●●ansible-playbook prepare_env_gbl_us.yml -i inventory/prod_gblus/inventory --limit server3 --vault-password-file=~/vault-password-fileansible-playbook prepare_env_airflow_certs.yml -i inventory/prod_gblus/inventory --limit server3 --vault-password-file=~/vault-password-fileCN_NAME=amraelp00007847.COMPANY.comSUBJECT_ALT_NAME= IP - ●●●●●●●●●●●●●ansible-playbook prepare_env_gbl_us.yml -i inventory/prod_gblus/inventory --limit server4 --vault-password-file=~/vault-password-fileansible-playbook prepare_env_airflow_certs.yml -i inventory/prod_gblus/inventory --limit server4 --vault-password-file=~/vault-password-fileCN_NAME=amraelp00007848.COMPANY.comSUBJECT_ALT_NAME= IP - ●●●●●●●●●●●●●●ansible-playbook prepare_env_gbl_us.yml -i inventory/prod_gblus/inventory --limit server5 --vault-password-file=~/vault-password-fileansible-playbook prepare_env_airflow_certs.yml -i inventory/prod_gblus/inventory --limit server5 --vault-password-file=~/vault-password-fileCN_NAME=amraelp00007849.COMPANY.comSUBJECT_ALT_NAME= IP - ●●●●●●●●●●●●●ansible-playbook prepare_env_gbl_us.yml -i inventory/prod_gblus/inventory --limit server6 --vault-password-file=~/vault-password-fileansible-playbook prepare_env_airflow_certs.yml -i inventory/prod_gblus/inventory --limit server6 --vault-password-file=~/vault-password-fileCN_NAME=amraelp00007871.COMPANY.comSUBJECT_ALT_NAME= IP - ●●●●●●●●●●●●●●Docker Version:amraelp00007844:root:[04:57 AM]:/home/morawm03> docker --versionDocker version 1.13.1, build b2f74b2/1.13.1amraelp00007870:root:[04:57 
AM]:/home/morawm03> docker --versionDocker version 1.13.1, build b2f74b2/1.13.1amraelp00007847:root:[04:57 AM]:/home/morawm03> docker --versionDocker version 1.13.1, build b2f74b2/1.13.1amraelp00007848:root:[04:57 AM]:/home/morawm03> docker --versionDocker version 1.13.1, build b2f74b2/1.13.1amraelp00007849:root:[04:57 AM]:/home/morawm03> docker --versionDocker version 1.13.1, build b2f74b2/1.13.1amraelp00007871:root:[05:00 AM]:/home/morawm03> docker --versionDocker version 1.13.1, build b2f74b2/1.13.1Configure Registry Login (registry-gbicomcloud.COMPANY.com):ansible-playbook prepare_registry_config.yml -i inventory/prod_gblus/inventory --limit server1 --vault-password-file=~/vault-password-fileansible-playbook prepare_registry_config.yml -i inventory/prod_gblus/inventory --limit server2 --vault-password-file=~/vault-password-fileansible-playbook prepare_registry_config.yml -i inventory/prod_gblus/inventory --limit server3 --vault-password-file=~/vault-password-fileansible-playbook prepare_registry_config.yml -i inventory/prod_gblus/inventory --limit server4 --vault-password-file=~/vault-password-fileansible-playbook prepare_registry_config.yml -i inventory/prod_gblus/inventory --limit server5 --vault-password-file=~/vault-password-fileansible-playbook prepare_registry_config.yml -i inventory/prod_gblus/inventory --limit server6 --vault-password-file=~/vault-password-fileRegistry (manual config):  Copy certs: /etc/docker/certs.d/registry-gbicomcloud.COMPANY.com/ from (mdm-reltio-handler-env\\ssl_certs\\registry)  docker login registry-gbicomcloud.COMPANY.com (login on service account too)  user/pass: mdm/**** (check mdm-reltio-handler-env\\group_vars\\all\\secret.yml)Playbooks installation order:Install node_exporter (run as a user with root access - systemctl node_exporter installation): ansible-playbook install_prometheus_node_exporter.yml -i inventory/prod_gblus/inventory --limit prometheus1 --vault-password-file=~/vault-password-file ansible-playbook 
install_prometheus_node_exporter.yml -i inventory/prod_gblus/inventory --limit prometheus2 --vault-password-file=~/vault-password-file ansible-playbook install_prometheus_node_exporter.yml -i inventory/prod_gblus/inventory --limit prometheus3 --vault-password-file=~/vault-password-file ansible-playbook install_prometheus_node_exporter.yml -i inventory/prod_gblus/inventory --limit prometheus4 --vault-password-file=~/vault-password-file ansible-playbook install_prometheus_node_exporter.yml -i inventory/prod_gblus/inventory --limit prometheus5 --vault-password-file=~/vault-password-file ansible-playbook install_prometheus_node_exporter.yml -i inventory/prod_gblus/inventory --limit prometheus6 --vault-password-file=~/vault-password-fileInstall Kafka ansible-playbook install_hub_broker_cluster.yml -i inventory/prod_gblus/inventory --vault-password-file=~/vault-password-fileInstall Kafka TOPICS: ansible-playbook install_hub_broker_cluster.yml -i inventory/prod_gblus/inventory --limit kafka1 --vault-password-file=~/vault-password-fileInstall Mongo ansible-playbook install_hub_mongo_rs_cluster.yml -i inventory/prod_gblus/inventory --vault-password-file=~/vault-password-fileInstall Kong ansible-playbook install_mdmgw_gateway_v1.yml -i inventory/prod_gblus/inventory --vault-password-file=~/vault-password-fileUpdate KONG Config ansible-playbook update_kong_api_v1.yml -i inventory/prod_gblus/inventory --limit kong_v1_01 --vault-password-file=~/vault-password-fileVerification: openssl s_client -connect amraelp00007848.COMPANY.com:8443 -servername gbl-mdm-hub-us-prod.COMPANY.com -CAfile /mnt/d/dev/mdm/GBL_US_NPROD/root_inter/RootCA-G2.cer openssl s_client -connect amraelp00007849.COMPANY.com:8443 -servername gbl-mdm-hub-us-prod.COMPANY.com -CAfile /mnt/d/dev/mdm/GBL_US_NPROD/root_inter/RootCA-G2.cer openssl s_client -connect amraelp00007871.COMPANY.com:8443 -servername gbl-mdm-hub-us-prod.COMPANY.com -CAfile /mnt/d/dev/mdm/GBL_US_NPROD/root_inter/RootCA-G2.cerInstall EFK 
ansible-playbook install_efk_stack.yml -i inventory/prod_gblus/inventory --vault-password-file=~/vault-password-fileInstall Prometheus services: mongo_exporter: ansible-playbook install_prometheus_mongo_exporter.yml -i inventory/prod_gblus/inventory --limit mongo3_exporter --vault-password-file=~/vault-password-file cadvisor: ansible-playbook install_prometheus_stack.yml -i inventory/prod_gblus/inventory --limit prometheus1 --vault-password-file=~/vault-password-file ansible-playbook install_prometheus_stack.yml -i inventory/prod_gblus/inventory --limit prometheus2 --vault-password-file=~/vault-password-file ansible-playbook install_prometheus_stack.yml -i inventory/prod_gblus/inventory --limit prometheus3 --vault-password-file=~/vault-password-file ansible-playbook install_prometheus_stack.yml -i inventory/prod_gblus/inventory --limit prometheus4 --vault-password-file=~/vault-password-file ansible-playbook install_prometheus_stack.yml -i inventory/prod_gblus/inventory --limit prometheus5 --vault-password-file=~/vault-password-file ansible-playbook install_prometheus_stack.yml -i inventory/prod_gblus/inventory --limit prometheus6 --vault-password-file=~/vault-password-file sqs_exporter: ansible-playbook install_prometheus_stack.yml -i inventory/prod_gblus/inventory --limit prometheus6 --vault-password-file=~/vault-password-fileInstall Consul ansible-playbook install_consul.yml -i inventory/prod_gblus/inventory --vault-password-file=~/vault-password-file# After operation get SecretID from consul container. 
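Copying the SecretID by hand is easy to get wrong; a minimal sketch of extracting it from the bootstrap output follows. The output layout (an "AccessorID:"/"SecretID:" line pair) matches the usual `consul acl bootstrap` format, but the IDs below are fake placeholders, not values from this environment.

```shell
# Pull the SecretID field out of `consul acl bootstrap` output so it can be
# pasted into secret.yml as mgmt_token. Sample output is used here because
# consul itself is assumed unavailable; the IDs are illustrative only.
extract_secret_id() {
  awk '/^SecretID:/ {print $2}'
}

sample='AccessorID:   b5c0e8a4-0000-0000-0000-000000000000
SecretID:     527347d3-0000-0000-0000-000000000000
Description:  Bootstrap Token (Global Management)'

printf '%s\n' "$sample" | extract_secret_id
```

On a real node this would be piped directly, e.g. `docker exec <consul-container> consul acl bootstrap | extract_secret_id` (container name is an assumption).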
On the container execute the following command:$ consul acl bootstrapand copy it as mgmt_token to consul secrets.ymlAfter install consul step run update consul playbook with proper mgmt_token (secret.yml) in every execution for each node.Update Consul ansible-playbook update_consul.yml -i inventory/prod_gblus/inventory --limit consul1 --vault-password-file=~/vault-password-file -v ansible-playbook update_consul.yml -i inventory/prod_gblus/inventory --limit consul2 --vault-password-file=~/vault-password-file -v ansible-playbook update_consul.yml -i inventory/prod_gblus/inventory --limit consul3 --vault-password-file=~/vault-password-file -vSetup Mongo Indexes and Collections:Create Collections and Indexes\nCreate Collections and Indexes:\n entityHistory\n\n db.entityHistory.createIndex({country: -1}, {background: true, name: "idx_country"});\n db.entityHistory.createIndex({sources: -1}, {background: true, name: "idx_sources"});\n db.entityHistory.createIndex({entityType: -1}, {background: true, name: "idx_entityType"});\n db.entityHistory.createIndex({status: -1}, {background: true, name: "idx_status"});\n db.entityHistory.createIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});\n db.entityHistory.createIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});\n db.entityHistory.createIndex({"entity.crosswalks.value": 1}, {background: true, name: "idx_crosswalks_v_asc"});\n db.entityHistory.createIndex({"entity.crosswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"});\n db.entityHistory.createIndex({forceModificationDate: -1}, {background: true, name: "idx_forceModificationDate"});\n db.entityHistory.createIndex({mdmSource: -1}, {background: true, name: "idx_mdmSource"});\n db.entityHistory.createIndex({entityChecksum: -1}, {background: true, name: "idx_entityChecksum"});\n db.entityHistory.createIndex({parentEntityId: -1}, {background: true, name: "idx_parentEntityId"}); \n \n \n \n\n 
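The createIndex statements above can also be applied non-interactively by collecting them into a script for the mongo shell instead of pasting them one by one. A minimal sketch: only the first two entityHistory indexes are included, the file path is arbitrary, and the host/db names in the final command are placeholders, not values from this environment.

```shell
# Batch the createIndex statements into a .js file that the mongo shell
# can load in one run; mongo itself is assumed unavailable here, so the
# script is only built, not executed.
cat > /tmp/entityHistory_indexes.js <<'EOF'
db.entityHistory.createIndex({country: -1}, {background: true, name: "idx_country"});
db.entityHistory.createIndex({sources: -1}, {background: true, name: "idx_sources"});
EOF
# Would then be run as: mongo <host>:27017/<db> /tmp/entityHistory_indexes.js
wc -l < /tmp/entityHistory_indexes.js
```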
entityRelations\n db.entityRelations.createIndex({country: -1}, {background: true, name: "idx_country"});\n db.entityRelations.createIndex({sources: -1}, {background: true, name: "idx_sources"});\n db.entityRelations.createIndex({relationType: -1}, {background: true, name: "idx_relationType"});\n db.entityRelations.createIndex({status: -1}, {background: true, name: "idx_status"});\n db.entityRelations.createIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});\n db.entityRelations.createIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});\n db.entityRelations.createIndex({startObjectId: -1}, {background: true, name: "idx_startObjectId"});\n db.entityRelations.createIndex({endObjectId: -1}, {background: true, name: "idx_endObjectId"});\n db.entityRelations.createIndex({"relation.crosswalks.value": 1}, {background: true, name: "idx_crosswalks_v_asc"}); \n db.entityRelations.createIndex({"relation.crosswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"}); \n db.entityRelations.createIndex({forceModificationDate: -1}, {background: true, name: "idx_forceModificationDate"}); \n db.entityRelations.createIndex({mdmSource: -1}, {background: true, name: "idx_mdmSource"});\n\n\n\n LookupValues\n db.LookupValues.createIndex({updatedOn: 1}, {background: true, name: "idx_updatedOn"});\n db.LookupValues.createIndex({countries: 1}, {background: true, name: "idx_countries"});\n db.LookupValues.createIndex({mdmSource: 1}, {background: true, name: "idx_mdmSource"});\n db.LookupValues.createIndex({type: 1}, {background: true, name: "idx_type"});\n db.LookupValues.createIndex({code: 1}, {background: true, name: "idx_code"});\n db.LookupValues.createIndex({valueUpdateDate: 1}, {background: true, name: "idx_valueUpdateDate"});\n\n\n ErrorLogs\n db.ErrorLogs.createIndex({plannedResubmissionDate: -1}, {background: true, name: "idx_plannedResubmissionDate_-1"});\n db.ErrorLogs.createIndex({timestamp: -1}, {background: 
true, name: "idx_timestamp_-1"});\n db.ErrorLogs.createIndex({exceptionClass: 1}, {background: true, name: "idx_exceptionClass_1"});\n db.ErrorLogs.createIndex({status: -1}, {background: true, name: "idx_status_-1"});\n\n\tbatchEntityProcessStatus\n \tdb.batchEntityProcessStatus.createIndex({batchName: -1, sourceId: -1}, {background: true, name: "idx_findByBatchNameAndSourceId"});\n\t db.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, objectType: -1, sourceIngestionDate: -1}, {background: true, name: "idx_EntitiesUnseen_SoftDeleteJob"});\n\t\tdb.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, sendDateMDM: -1, updateDateMDM: -1}, {background: true, name: "idx_ProcessingResult_ProcessingJob"});\n\t\tdb.batchEntityProcessStatus.createIndex({batchName: -1, sendDateMDM: -1, updateDateMDM: -1}, {background: true, name: "idx_ProcessingResultAll_ProcessingJob"});\n\n\n batchInstance\n\t\t- create collection\n\n\trelationCache\n\t\tdb.relationCache.createIndex({startSourceId: -1}, {background: true, name: "idx_findByStartSourceId"});\n\n DCRRequests\n db.DCRRequests.createIndex({type: -1, "status.name": -1}, {background: true, name: "idx_typeStatusNameFind_TraceVR"});\n db.DCRRequests.createIndex({entityURI: -1, "status.name": -1}, {background: true, name: "idx_entityURIStatusNameFind_SubmitVR"});\n db.DCRRequests.createIndex({changeRequestURI: -1, "status.name": -1}, {background: true, name: "idx_changeRequestURIStatusNameFind_DSResponse"});\n \n entityMatchesHistory \n db.entityMatchesHistory.createIndex({_id: -1, "matches.matchObjectUri": -1, "matches.matchType": -1}, {background: true, name: "idx_findAutoLinkMatch_CleanerStream"});\n\n Connect ENV with Prometheus:Prometheus config\nnode_exporter\n - targets:\n - "amraelp00007844.COMPANY.com:9100"\n - "amraelp00007870.COMPANY.com:9100"\n - "amraelp00007847.COMPANY.com:9100"\n - "amraelp00007848.COMPANY.com:9100"\n - "amraelp00007849.COMPANY.com:9100"\n - 
"amraelp00007871.COMPANY.com:9100"\n labels:\n env: gblus_prod\n component: node\n \n\nkafka\n - targets:\n - "amraelp00007848.COMPANY.com:9101"\n labels:\n env: gblus_prod\n node: 1\n component: kafka\n - targets:\n - "amraelp00007849.COMPANY.com:9101"\n labels:\n env: gblus_prod\n node: 2\n component: kafka\n - targets:\n - "amraelp00007871.COMPANY.com:9101"\n labels:\n env: gblus_prod\n node: 3\n component: kafka\n \n \nkafka_exporter\n - targets:\n - "amraelp00007848.COMPANY.com:9102"\n labels:\n trade: gblus\n node: 1\n component: kafka\n env: gblus_prod\n - targets:\n - "amraelp00007849.COMPANY.com:9102"\n labels:\n trade: gblus\n node: 2\n component: kafka\n env: gblus_prod\n - targets:\n - "amraelp00007871.COMPANY.com:9102"\n labels:\n trade: gblus\n node: 3\n component: kafka\n env: gblus_prod \n \n \nComponents:\n jmx_manager\n - targets:\n - "amraelp00007848.COMPANY.com:9104"\n labels:\n env: gblus_prod\n node: 1\n component: manager\n - targets:\n - "amraelp00007849.COMPANY.com:9104"\n labels:\n env: gblus_prod\n node: 2\n component: manager\n - targets:\n - "amraelp00007871.COMPANY.com:9104"\n labels:\n env: gblus_prod\n node: 3\n component: manager \n \n jmx_event_publisher\n - targets:\n - "amraelp00007848.COMPANY.com:9106"\n labels:\n env: gblus_prod\n node: 1\n component: publisher\n - targets:\n - "amraelp00007849.COMPANY.com:9106"\n labels:\n env: gblus_prod\n node: 2\n component: publisher\n - targets:\n - "amraelp00007871.COMPANY.com:9106"\n labels:\n env: gblus_prod\n node: 3\n component: publisher\n \n jmx_reltio_subscriber\n - targets:\n - "amraelp00007848.COMPANY.com:9105"\n labels:\n env: gblus_prod\n node: 1\n component: subscriber\n - targets:\n - "amraelp00007849.COMPANY.com:9105"\n labels:\n env: gblus_prod\n node: 2\n component: subscriber\n - targets:\n - "amraelp00007871.COMPANY.com:9105"\n labels:\n env: gblus_prod\n node: 3\n component: subscriber\n \n jmx_batch_service\n - targets:\n - "amraelp00007848.COMPANY.com:9107"\n 
labels:\n env: gblus_prod\n node: 1\n component: batch_service\n - targets:\n - "amraelp00007849.COMPANY.com:9107"\n labels:\n env: gblus_prod\n node: 2\n component: batch_service\n - targets:\n - "amraelp00007871.COMPANY.com:9107"\n labels:\n env: gblus_prod\n node: 3\n component: batch_service\n \n batch_service_actuator\n - targets:\n - "amraelp00007848.COMPANY.com:9116"\n labels:\n env: gblus_prod\n node: 1\n component: batch_service\n - targets:\n - "amraelp00007849.COMPANY.com:9116"\n labels:\n env: gblus_prod\n node: 2\n component: batch_service\n - targets:\n - "amraelp00007871.COMPANY.com:9116"\n labels:\n env: gblus_prod\n node: 3\n component: batch_service\n \n \nsqs_exporter \n - targets:\n - "amraelp00007871.COMPANY.com:9122"\n labels:\n env: gblus_prod\n component: sqs_exporter\n\n \n \ncadvisor\n \n - targets:\n - "amraelp00007844.COMPANY.com:9103"\n labels:\n env: gblus_prod\n node: 1\n component: cadvisor_exporter\n - targets:\n - "amraelp00007870.COMPANY.com:9103"\n labels:\n env: gblus_prod\n node: 2\n component: cadvisor_exporter \n - targets:\n - "amraelp00007847.COMPANY.com:9103"\n labels:\n env: gblus_prod\n node: 3\n component: cadvisor_exporter \n - targets:\n - "amraelp00007848.COMPANY.com:9103"\n labels:\n env: gblus_prod\n node: 4\n component: cadvisor_exporter \n - targets:\n - "amraelp00007849.COMPANY.com:9103"\n labels:\n env: gblus_prod\n node: 5\n component: cadvisor_exporter \n - targets:\n - "amraelp00007871.COMPANY.com:9103"\n labels:\n env: gblus_prod\n node: 6\n component: cadvisor_exporter \n \n \nmongodb_exporter\n \n - targets:\n - "amraelp00007847.COMPANY.com:9120"\n labels:\n env: gblus_prod\n component: mongodb_exporter\n \n \nkong_exporter\n - targets:\n - "amraelp00007848.COMPANY.com:9542"\n labels:\n env: gblus_prod\n node: 1\n component: kong_exporter\n - targets:\n - "amraelp00007849.COMPANY.com:9542"\n labels:\n env: gblus_prod\n node: 2\n component: kong_exporter\n - targets:\n - 
"amraelp00007871.COMPANY.com:9542"\n labels:\n env: gblus_prod\n node: 3\n component: kong_exporter\n"
},
{
"title": "Configuration (gblus)",
"pageID": "164470073",
"pageLink": "/pages/viewpage.action?pageId=164470073",
"content": "Config file: gblmdm-hub-us-spec_v04.xlsxAWS ResourcesResource NameResource TypeSpecificationAWS RegionAWS Availability ZoneDepends onDescriptionComponentsHUBGWInterfaceGBL MDM US HUB nProd Svr1 amraelp00007334PFE-AWS-MULTI-AZ-DEV-us-east-1EC2r5.2xlargeus-east-1bEBS APP DATA MDM NPROD SVR1EBS DOCKER DATA MDM NPROD SVR1- Mongo -  no data redundancy for nProd- Disks:     Mount 50G - docker installation directory    Mount 1000GB - /app/ - docker applications local storageOS: Red Hat Enterprise Linux Server release 7.3 (Maipo)mongoEFKHUBoutboundGBL MDM US HUB nProd Svr2 amraelp00007335PFE-AWS-MULTI-AZ-DEV-us-east-1EC2r5.2xlargeus-east-1bEBS APP DATA MDM NPROD SVR2EBS DOCKER DATA MDM NPROD SVR2- Kafka and zookeeper - Kong and Cassandra- Disks:     Mount 50G - docker installation directory    Mount 500GB - /app/ - docker applications local storageOS: Red Hat Enterprise Linux Server release 7.3 (Maipo)KafkaZookeeperKongCassandraGWinboundEBS APP DATA MDM nProd Svr1EBS1000 GB XFSus-east-1bmount to /app on amraelp00007334EBS APP DATA MDM nProd Svr2EBS500 GB XFSus-east-1bmount to /app on amraelp00007335EBS DOCKER DATA MDM nProd Svr1EBS50 GB XFSus-east-1bmount to docker devicemapper on amraelp00007334EBS DOCKER DATA MDM nProd Svr2EBS50 GB XFSus-east-1bmount to docker devicemapper on amraelp00007335GBLMDMHUB US S3 Bucketgblmdmhubnprodamrasp100762S3us-east-1SSL cert for domain gbl-mdm-hub-us-nprod.COMPANY.comCertificateDomain : domain gbl-mdm-hub-us-nprod.COMPANY.comDNS RecordDNSAddress: gbl-mdm-hub-us-nprod.COMPANY.comRolesNameTypePrivilegesMember ofDescriptionRequests IDProvided accessUNIX-IoD-global-mdmhub-us-nprod-computers-UUnix Computer ROLEAccess to hosts: GBL MDM US HUB nProd Svr1GBL MDM US HUB nProd Svr2Computer role including all MDM serversUNIX-GBLMDMHUB-US-NPROD-ADMIN-UUser Role- dzdo root - access to docker- access to docker-engine (systemctl) restart, stop, start docker engineUNIX-GBLMDMHUB-US-NPROD-COMPUTERS-UAdmin role to manage all resources on 
serversNSA-UNIX: 20200303065003900KUCR - GBL32099554iWARECP - GENDEL - GBL32134727iMORAWM03 - GBL32097468iUNIX-GBLMDMHUB-US-NPROD-HUBROLE-UUser Role- Read only for logs- dzdo docker ps * - list docker container- dzdo docker logs * - check docker container logs- Read access to /app/* - check  docker container logsUNIX-GBLMDMHUB-US-NPROD-COMPUTERS-Urole without root access, read only for logs and check docker status. It will be used by monitoringNSA-UNIX: 20200303065731900UNIX-GBLMDMHUB-US-NPROD-SEROLE-UUser Role- dzdo docker * UNIX-GBLMDMHUB-US-NPROD-COMPUTERS-Uservice role - it will be used to run microservices  from Jenkins CD pipelineNSA-UNIX: 20200303070216948Service Account - GBL32099918imdmusnprUNIX-GBLMDMHUB-US-NPROD-READONLYUser Role- Read only for logs- Read access to /app/* - check  docker container logsUNIX-GBLMDMHUB-US-NPROD-COMPUTERS-UNSA-UNIX: 20200303070544951Ports - Security Group PFE-SG-GBLMDMHUB-US-APP-NPROD-001 Port ApplicationWhitelisted8443Kong (API proxy)ALL from COMPANY VPN9094Kafka - SASL_SSL protocolALL from COMPANY VPN9093Kafka - SSL protocolALL from COMPANY VPN2181ZookeeperALL from COMPANY VPN27017MongoALL from COMPANY VPN9999HawtIO - administration consoleALL from COMPANY VPN9200ElasticsearchALL from COMPANY VPN5601KibanaALL from COMPANY VPN9100 - 9125Prometheus exportersALL from COMPANY VPN9542Kong exporterALL from COMPANY VPN2376Docker encrypted communication with the daemonALL from COMPANY VPNOpen ports between Jenkins and AirflowRequest to Przemek.Puchajda@COMPANY.com and Mateusz.Szewczyk@COMPANY.com - this is required to open ports between WBS<>IOD blocked traffic ( the requests take some time to finish so request at the beginning) A connection is required from euw1z1dl039.COMPANY.com (●●●●●●●●●●●●●)                       to amraelp00008810.COMPANY.com (●●●●●●●●●●●●●) port 2376. This connection is between airflow and docker host to run gblus DAGs.                       to amraelp00008810.COMPANY.com (●●●●●●●●●●●●●) port 22. 
This connection is between airflow and docker host to run gblus DAGs.      2. A connection is required from the Jenkins instance (gbinexuscd01 - ●●●●●●●●●●●●●).                       to amraelp00008810.COMPANY.com (●●●●●●●●●●●●●) port 22. This connection is between Jenkins and the target host required for code deployment purposes.DocumentationService Account ( Jenkins / server access )http://btondemand.COMPANY.com/solution/160303162657677NSA - UNIX - user access to Servers:http://btondemand.COMPANY.com/solution/131014104610578InstructionsHow to add user access to UNIX-GBLMDMHUB-US-NPROD-ADMIN-Ulog in to http://btondemand.COMPANY.com/search NSA - UNIXuser access to Servers - http://btondemand.COMPANY.com/solution/131014104610578go to Request Manager -> Request Catalog Search NSAChoose NSA-UNIX NSA Requests for Unix.ContinueFill Form Add user access details formAccount Type-NSA-UNIXName-Morawski, MikolajAD Username-MORAWM03User Domain-EMEARequestID-20200310100151888Request Details BelowRoleName: YesDescription:requestorCommentsList: Hi Team,I created the request to add account (EMEAMORAWM03) to the ADMIN role on the following servers:amraelp00007334amraelp00007335Role name: UNIX-GBLMDMHUB-US-NPROD-ADMIN-U -> member of: UNIX-GBLMDMHUB-US-NPROD-COMPUTERS-U -> NSA-UNIX: 20200303065003900Could you please verify if I provided all required information?Regards,MikolajaccessToSpecificServerList_roleLst_2: NobusinessJustificationList: MDM HUB Team access toGBL MDM US HUB nProd Svr1 (amraelp00007334) - PFE-AWS-MULTI-AZ-DEV-us-east-1andGBL MDM US HUB nProd Svr2 (amraelp00007335) - PFE-AWS-MULTI-AZ-DEV-us-east-1regarding Fletcher projectserverLocationList: Not ApplicablenisDomainOtherList: OtherroleGroupAccount_roleLst_6: Add to Role Group(s)roleGroupNameList: UNIX-GBLMDMHUB-US-NPROD-ADMIN-UaccountPrivilegeList_roleLst_7: Add PrivilegesaccountList_roleLst_8: UNIX group membershipunixGroupNameList: UNIX-GBLMDMHUB-US-NPROD-ADMIN-USubmit requestHow to add/create new Service 
Account with access to UNIX-GBLMDMHUB-US-NPROD-SEROLE-UService Account NameUNIX group namedetailsBTOnDemandLessons Learned mdmusnprmdmhubusnprService Account Name has to contain max 8 charactersGBL32099918iRE Requires Additional Information (GBL32099918i).msglog in to http://btondemand.COMPANY.com/search NSA - UNIXuser access to Servers - http://btondemand.COMPANY.com/solution/131014104610578go to Request Manager -> Request Catalog Search NSAChoose NSA-UNIX NSA Requests for Unix.ContinueFill FormNo -> LegacyYesExistingLegacyamraelp00007334amraelp00007335N/AOtherTo manage the Service account and Software for the MDM HUBIt will be used to run microservices from Jenkins CD pipelinePrimary: VARGAA08Secondary: TIRUMS05Service AccountService Account Name: UNIX group nameNPROD:mdmusnpr mdmhubusnpr - Service Account Name has to contain max 8 charactersMDM HUB Service Account access (related to Docker microservices and Jenkins CD) forGBL MDM US HUB nProd Svr1 (amraelp00007334) - PFE-AWS-MULTI-AZ-DEV-us-east-1andGBL MDM US HUB nProd Svr2 (amraelp00007335) - PFE-AWS-MULTI-AZ-DEV-us-east-1regarding Fletcher projectHi Team,I am trying to create the request to create the Service Account for the following two servers. 
amraelp00007334amraelp00007335I want to provide the privileges for this Service Account:Role name: UNIX-GBLMDMHUB-US-NPROD-SEROLE-U -> member of: UNIX-GBLMDMHUB-US-NPROD-COMPUTERS-U -> NSA-UNIX: 20200303070216948- dzdo docker * - folder access read/writeComputer role related: UNIX-IoD-global-mdmhub-us-nprod-computers-UCould you please verify if I provided all required information and this Request is correct?Regards,MikolajHow to open ports / create new Security Group - PFE-SG-GBLMDMHUB-US-APP-NPROD-001http://btondemand.COMPANY.com/solution/120906165824277To create a new security group:Create server Security Group and Open Ports on  SC queue Name: GBL-BTI-IOD AWS FULL SUPPORTlog in to http://btondemand.COMPANY.com/ go to Get Support Search for queue: GBL-BTI-IOD AWS FULL SUPPORTSubmit Request to this queue:RequestHi Team,Could you please create a new security group and assign it to two servers.GBL MDM US HUB nProd Svr1 (amraelp00007334) - PFE-AWS-MULTI-AZ-DEV-us-east-1andGBL MDM US HUB nProd Svr2 (amraelp00007335) - PFE-AWS-MULTI-AZ-DEV-us-east-1Please add the following owners:Primary: VARGAA08Secondary: TIRUMS05(please let me know if approval is required)New Security group Requested: PFE-SG-GBLMDMHUB-US-APP-NPROD-001Please Open the following ports:Port  Application Whitelisted8443 Kong (API proxy) ALL from COMPANY VPN9094 Kafka - SASL_SSL protocol ALL from COMPANY VPN9093 Kafka - SSL protocol ALL from COMPANY VPN2181 Zookeeper ALL from COMPANY VPN 27017 Mongo ALL from COMPANY VPN9999 HawtIO - administration console ALL from COMPANY VPN9200 Elasticsearch ALL from COMPANY VPN5601 Kibana ALL from COMPANY VPN9100 - 9125 Prometheus exporters ALL from COMPANY VPNApply this group to the following servers:amraelp00007334amraelp00007335Regards,MikolajThis will create a new Security Grouphttp://btondemand.COMPANY.com/My-Tickets/Ticket-Details?ticket=GBL32141041iThen these security groups have to be assigned to servers through the IOD portal by the Servers Owner.To 
open new ports:log in to http://btondemand.COMPANY.com/ go to Get Support Search for queue: GBL-BTI-IOD AWS FULL SUPPORTSubmit Request to this queue:RequestHi,Could you please modify the below security group and open the following port.NONPROD security group:Security group: PFE-SG-GBLMDMHUB-US-APP-NPROD-001Port: 2376(this port is related to Docker for encrypted communication with the daemon)The host related to this:amraelp00007334amraelp00007335Regards,MikolajCertificates ConfigurationKafka - GBL32139266i  GO TO:How to Generate JKS Keystore and Truststorekeytool -genkeypair -alias kafka.gbl-mdm-hub-us-nprod.COMPANY.com -keyalg RSA -keysize 2048 -keystore server.keystore.jks -dname "CN=kafka.gbl-mdm-hub-us-nprod.COMPANY.com, O=COMPANY, L=mdm_gbl_us_hub, C=US"keytool -certreq -alias kafka.gbl-mdm-hub-us-nprod.COMPANY.com -file kafka.gbl-mdm-hub-us-nprod.COMPANY.com.csr -keystore server.keystore.jksSAN:gbl-mdm-hub-us-nprod.COMPANY.comamraelp00007334.COMPANY.com●●●●●●●●●●●●amraelp00007335.COMPANY.com●●●●●●●●●●●●Create guest_user for KAFKA - "CN=kafka.guest_user.gbl-mdm-hub-us-nprod.COMPANY.com, O=COMPANY, L=GBLMDMHUB-US-NONPROD-KAFKA, C=US":GO TO: How to Generate JKS Keystore and Truststorekeytool -genkeypair -alias guest_user -keyalg RSA -keysize 2048 -keystore guest_user.keystore.jks -dname "CN=kafka.guest_user.gbl-mdm-hub-us-nprod.COMPANY.com, O=COMPANY, L=GBLMDMHUB-US-NONPROD-KAFKA, C=US"keytool -certreq -alias guest_user -file kafka.guest_user.gbl-mdm-hub-us-nprod.COMPANY.com.csr -keystore guest_user.keystore.jksKong - GBL32144418iopenssl req -nodes -newkey rsa:2048 -sha256 -keyout gbl-mdm-hub-us-nprod.key -out gbl-mdm-hub-us-nprod.csrSubject Alternative Namesgbl-mdm-hub-us-nprod.COMPANY.comamraelp00007334.COMPANY.com●●●●●●●●●●●●amraelp00007335.COMPANY.com●●●●●●●●●●●●EFK - GBL32139762i  , GBL32144243iopenssl req -nodes -newkey rsa:2048 -sha256 -keyout mdm-log-management-gbl-us-nonprod.key -out mdm-log-management-gbl-us-nonprod.csr 
mdm-log-management-gbl-us-nonprod.COMPANY.comSubject Alternative Names mdm-log-management-gbl-us-nonprod.COMPANY.comgbl-mdm-hub-us-nprod.COMPANY.comamraelp00007334.COMPANY.com●●●●●●●●●●●●amraelp00007335.COMPANY.com●●●●●●●●●●●●openssl req -nodes -newkey rsa:2048 -sha256 -keyout mdm-esnode1-gbl-us-nonprod.key -out mdm-esnode1-gbl-us-nonprod.csr mdm-esnode1-gbl-us-nonprod.COMPANY.com - ElasticsearchSubject Alternative Names mdm-esnode1-gbl-us-nonprod.COMPANY.comgbl-mdm-hub-us-nprod.COMPANY.comamraelp00007334.COMPANY.com●●●●●●●●●●●●amraelp00007335.COMPANY.com●●●●●●●●●●●●Domain Configuration:Example request: GBL30514754i "Register domains "mdm-log-management*"log in to http://btondemand.COMPANY.com/getsupportWhat can we help you with? - Search for "Network Team Ticket"Select the most relevant topic - "DNS Request"Submit a ticket to this queue.Ticket Details:RequestHi,Could you please register the following domains:ADD the below DNS entry:========================mdm-log-management-gbl-us-nonprod.COMPANY.com              Alias Record to                             amraelp00007334.COMPANY.com[●●●●●●●●●●●●]gbl-mdm-hub-us-nprod.COMPANY.com                                        Alias Record to                             amraelp00007335.COMPANY.com[●●●●●●●●●●●●]Kind regards,MikolajEnvironment InstallationPre:rmdir /var/lib/dockerln -s /app/docker /var/lib/dockerumount /var/lib/dockerlvremove /dev/datavg/varlibdockervgreduce datavg /dev/nvme1n1Clear content of /etc/sysconfig/docker-storage to DOCKER_STORAGE_OPTIONS="" to use daemon.json fileAnsible:ansible-playbook prepare_env_gbl_us.yml -i inventory/dev_gblus/inventory --limit server1 --vault-password-file=~/vault-password-fileansible-playbook prepare_env_airflow_certs.yml -i inventory/dev_gblus/inventory --limit server1 --vault-password-file=~/vault-password-fileansible-playbook prepare_env_gbl_us.yml -i inventory/dev_gblus/inventory --limit server2 --vault-password-file=~/vault-password-fileansible-playbook 
prepare_env_airflow_certs.yml -i inventory/dev_gblus/inventory --limit server2 --vault-password-file=~/vault-password-fileansible-playbook prepare_env_gbl_us.yml -i inventory/dev_gblus/inventory --limit server3 --vault-password-file=~/vault-password-fileansible-playbook prepare_env_airflow_certs.yml -i inventory/dev_gblus/inventory --limit server3 --vault-password-file=~/vault-password-filecopy daemon_docker_tls_overlay.json.j2 to /etc/docker/daemon.jsonFIX using - https://stackoverflow.com/questions/44052054/unable-to-start-docker-after-configuring-hosts-in-daemon-json$ sudo cp /lib/systemd/system/docker.service /etc/systemd/system/\n$ sudo sed -i 's/\\ -H\\ fd:\\/\\///g' /etc/systemd/system/docker.service\n$ sudo systemctl daemon-reload\n$ sudo service docker restartDocker Version:amraelp00007334:root:[10:10 AM]:/app> docker --versionDocker version 1.13.1, build b2f74b2/1.13.1amraelp00007335:root:[10:04 AM]:/app> docker --versionDocker version 1.13.1, build b2f74b2/1.13.1[root@amraelp00008810 docker]# docker --versionDocker version 19.03.13-ce, build 4484c46Configure Registry Login (registry-gbicomcloud.COMPANY.com):ansible-playbook prepare_registry_config.yml -i inventory/dev_gblus/inventory --limit server1 --vault-password-file=~/vault-password-file - using ●●●●●●●●●●●●● root accessansible-playbook prepare_registry_config.yml -i inventory/dev_gblus/inventory --limit server2 --vault-password-file=~/vault-password-fileansible-playbook prepare_registry_config.yml -i inventory/dev_gblus/inventory --limit server3 --vault-password-file=~/vault-password-fileansible-playbook prepare_registry_config.yml -i inventory/dev_gblus/inventory --limit prometheus1 --vault-password-file=~/vault-password-file - using ●●●●●●●●●●●● service accountansible-playbook prepare_registry_config.yml -i inventory/dev_gblus/inventory --limit prometheus2 --vault-password-file=~/vault-password-fileansible-playbook prepare_registry_config.yml -i inventory/dev_gblus/inventory --limit prometheus3 
--vault-password-file=~/vault-password-fileRegistry (manual config):  Copy certs: /etc/docker/certs.d/registry-gbicomcloud.COMPANY.com/ from (mdm-reltio-handler-env\\ssl_certs\\registry)  docker login registry-gbicomcloud.COMPANY.com (login on service account too)  user/pass: mdm/**** (check mdm-reltio-handler-env\\group_vars\\all\\secret.yml)Playbooks installation order:Install node_exporter:    ansible-playbook install_prometheus_node_exporter.yml -i inventory/dev_gblus/inventory --limit prometheus1 --vault-password-file=~/vault-password-file    ansible-playbook install_prometheus_node_exporter.yml -i inventory/dev_gblus/inventory --limit prometheus2 --vault-password-file=~/vault-password-file ansible-playbook install_prometheus_node_exporter.yml -i inventory/dev_gblus/inventory --limit prometheus3 --vault-password-file=~/vault-password-fileInstall Kafka  ansible-playbook install_hub_broker.yml -i inventory/dev_gblus/inventory --limit broker --vault-password-file=~/vault-password-fileInstall Mongo   ansible-playbook install_hub_db.yml -i inventory/dev_gblus/inventory --limit mongo --vault-password-file=~/vault-password-fileInstall Kong   ansible-playbook install_mdmgw_gateway_v1.yml -i inventory/dev_gblus/inventory --limit kong_v1_01 --vault-password-file=~/vault-password-fileUpdate KONG Config (IT NEEDS TO BE UPDATED ON EACH ENV (DEV, QA, STAGE)!!)  
ansible-playbook update_kong_api_v1.yml -i inventory/dev_gblus/inventory --limit kong_v1_01 --vault-password-file=~/vault-password-file  Verification:    openssl s_client -connect amraelp00007335.COMPANY.com:8443 -servername gbl-mdm-hub-us-nprod.COMPANY.com -CAfile /mnt/d/dev/mdm/GBL_US_NPROD/root_inter/RootCA-G2.cerInstall EFK  ansible-playbook install_efk_stack.yml -i inventory/dev_gblus/inventory --limit efk --vault-password-file=~/vault-password-fileInstall Fluentd Forwarder (without this docker logging may not work and docker commands will be blocked)  ansible-playbook install_fluentd_forwarder.yml -i inventory/dev_gblus/inventory --limit docker-services --vault-password-file=~/vault-password-fileInstall Prometheus services:  mongo_exporter:    ansible-playbook install_prometheus_mongo_exporter.yml -i inventory/dev_gblus/inventory --limit mongo_exporter1 --vault-password-file=~/vault-password-file  cadvisor:    ansible-playbook install_prometheus_stack.yml -i inventory/dev_gblus/inventory --limit prometheus1 --vault-password-file=~/vault-password-file    ansible-playbook install_prometheus_stack.yml -i inventory/dev_gblus/inventory --limit prometheus2 --vault-password-file=~/vault-password-file ansible-playbook install_prometheus_stack.yml -i inventory/dev_gblus/inventory --limit prometheus3 --vault-password-file=~/vault-password-file  sqs_exporter:     ansible-playbook install_prometheus_stack.yml -i inventory/dev_gblus/inventory --limit prometheus1 --vault-password-file=~/vault-password-file    ansible-playbook install_prometheus_stack.yml -i inventory/stage_gblus/inventory --limit prometheus1 --vault-password-file=~/vault-password-file    ansible-playbook install_prometheus_stack.yml -i inventory/qa_gblus/inventory --limit prometheus1 --vault-password-fileInstall Consul ansible-playbook install_consul.yml -i inventory/prod_gblus/inventory --vault-password-file=~/vault-password-file# After operation get SecretID from consul container. 
On the container execute the following command:$ consul acl bootstrapand copy it as mgmt_token to consul secrets.ymlAfter install consul step run update consul playbookUpdate Consul ansible-playbook update_consul.yml -i inventory/prod_gblus/inventory --limit consul1 --vault-password-file=~/vault-password-file -v Setup Mongo Indexes and Collections:Create Collections and Indexes\nCreate Collections and Indexes:\n entityHistory\n\n db.entityHistory.createIndex({country: -1}, {background: true, name: "idx_country"});\n db.entityHistory.createIndex({sources: -1}, {background: true, name: "idx_sources"});\n db.entityHistory.createIndex({entityType: -1}, {background: true, name: "idx_entityType"});\n db.entityHistory.createIndex({status: -1}, {background: true, name: "idx_status"});\n db.entityHistory.createIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});\n db.entityHistory.createIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});\n db.entityHistory.createIndex({"entity.crosswalks.value": 1}, {background: true, name: "idx_crosswalks_v_asc"});\n db.entityHistory.createIndex({"entity.crosswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"});\n db.entityHistory.createIndex({forceModificationDate: -1}, {background: true, name: "idx_forceModificationDate"});\n db.entityHistory.createIndex({mdmSource: -1}, {background: true, name: "idx_mdmSource"});\n db.entityHistory.createIndex({entityChecksum: -1}, {background: true, name: "idx_entityChecksum"});\n db.entityHistory.createIndex({parentEntityId: -1}, {background: true, name: "idx_parentEntityId"}); \n \n \n \n\n entityRelations\n db.entityRelations.createIndex({country: -1}, {background: true, name: "idx_country"});\n db.entityRelations.createIndex({sources: -1}, {background: true, name: "idx_sources"});\n db.entityRelations.createIndex({relationType: -1}, {background: true, name: "idx_relationType"});\n db.entityRelations.createIndex({status: -1}, 
{background: true, name: "idx_status"});\n db.entityRelations.createIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});\n db.entityRelations.createIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});\n db.entityRelations.createIndex({startObjectId: -1}, {background: true, name: "idx_startObjectId"});\n db.entityRelations.createIndex({endObjectId: -1}, {background: true, name: "idx_endObjectId"});\n db.entityRelations.createIndex({"relation.crosswalks.value": 1}, {background: true, name: "idx_crosswalks_v_asc"}); \n db.entityRelations.createIndex({"relation.crosswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"}); \n db.entityRelations.createIndex({forceModificationDate: -1}, {background: true, name: "idx_forceModificationDate"}); \n db.entityRelations.createIndex({mdmSource: -1}, {background: true, name: "idx_mdmSource"});\n\n\n\n LookupValues\n db.LookupValues.createIndex({updatedOn: 1}, {background: true, name: "idx_updatedOn"});\n db.LookupValues.createIndex({countries: 1}, {background: true, name: "idx_countries"});\n db.LookupValues.createIndex({mdmSource: 1}, {background: true, name: "idx_mdmSource"});\n db.LookupValues.createIndex({type: 1}, {background: true, name: "idx_type"});\n db.LookupValues.createIndex({code: 1}, {background: true, name: "idx_code"});\n db.LookupValues.createIndex({valueUpdateDate: 1}, {background: true, name: "idx_valueUpdateDate"});\n\n\n ErrorLogs\n db.ErrorLogs.createIndex({plannedResubmissionDate: -1}, {background: true, name: "idx_plannedResubmissionDate_-1"});\n db.ErrorLogs.createIndex({timestamp: -1}, {background: true, name: "idx_timestamp_-1"});\n db.ErrorLogs.createIndex({exceptionClass: 1}, {background: true, name: "idx_exceptionClass_1"});\n db.ErrorLogs.createIndex({status: -1}, {background: true, name: "idx_status_-1"});\n\n\tbatchEntityProcessStatus\n db.batchEntityProcessStatus.createIndex({batchName: -1, sourceId: -1}, {background: true, name: 
"idx_findByBatchNameAndSourceId"});\n db.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, objectType: -1, sourceIngestionDate: -1}, {background: true, name: "idx_EntitiesUnseen_SoftDeleteJob"});\n db.batchEntityProcessStatus.createIndex({batchName: -1, deleted: -1, sendDateMDM: -1, updateDateMDM: -1}, {background: true, name: "idx_ProcessingResult_ProcessingJob"});\n db.batchEntityProcessStatus.createIndex({batchName: -1, sendDateMDM: -1, updateDateMDM: -1}, {background: true, name: "idx_ProcessingResultAll_ProcessingJob"});\n\n batchInstance\n\t\t- create collection\n\n\trelationCache\n\t\tdb.relationCache.createIndex({startSourceId: -1}, {background: true, name: "idx_findByStartSourceId"});\n\n DCRRequests\n db.DCRRequests.createIndex({type: -1, "status.name": -1}, {background: true, name: "idx_typeStatusNameFind_TraceVR"});\n db.DCRRequests.createIndex({entityURI: -1, "status.name": -1}, {background: true, name: "idx_entityURIStatusNameFind_SubmitVR"});\n db.DCRRequests.createIndex({changeRequestURI: -1, "status.name": -1}, {background: true, name: "idx_changeRequestURIStatusNameFind_DSResponse"});\n \n entityMatchesHistory \n db.entityMatchesHistory.createIndex({_id: -1, "matches.matchObjectUri": -1, "matches.matchType": -1}, {background: true, name: "idx_findAutoLinkMatch_CleanerStream"});\n\n\n Connect ENV with Prometheus:Update config -  ansible-playbook install_prometheus_configuration.yml -i inventory/prod_gblus/inventory --limit prometheus2 --vault-password-file=~/vault-password-filePrometheus config\nnode_exporter\n - targets:\n - "amraelp00007334.COMPANY.com:9100"\n - "amraelp00007335.COMPANY.com:9100"\n labels:\n env: gblus_dev\n component: node\n\n\nkafka\n - targets:\n - "amraelp00007335.COMPANY.com:9101"\n labels:\n env: gblus_dev\n node: 1 \n component: kafka\n \n \nkafka_exporter\n\n - targets:\n - "amraelp00007335.COMPANY.com:9102"\n labels:\n trade: gblus\n node: 1\n component: kafka\n env: gblus_dev \n\n\nComponents:\n 
jmx_manager\n - targets:\n - "amraelp00007335.COMPANY.com:9104"\n labels:\n env: gblus_dev\n node: 1\n component: manager\n - targets:\n - "amraelp00007335.COMPANY.com:9108"\n labels:\n env: gblus_qa\n node: 1\n component: manager\n - targets:\n - "amraelp00007335.COMPANY.com:9112"\n labels:\n env: gblus_stage\n node: 1\n component: manager \n jmx_event_publisher\n - targets:\n - "amraelp00007334.COMPANY.com:9106"\n labels:\n env: gblus_dev\n node: 1\n component: publisher \n - targets:\n - "amraelp00007334.COMPANY.com:9110"\n labels:\n env: gblus_qa\n node: 1\n component: publisher \n - targets:\n - "amraelp00007334.COMPANY.com:9104"\n labels:\n env: gblus_stage\n node: 1\n component: publisher \n jmx_reltio_subscriber\n - targets:\n - "amraelp00007334.COMPANY.com:9105"\n labels:\n env: gblus_dev\n node: 1\n component: subscriber\n - targets:\n - "amraelp00007334.COMPANY.com:9109"\n labels:\n env: gblus_qa\n node: 1\n component: subscriber\n - targets:\n - "amraelp00007334.COMPANY.com:9113"\n labels:\n env: gblus_stage\n node: 1\n component: subscriber\n jmx_batch_service\n - targets:\n - "amraelp00007335.COMPANY.com:9107"\n labels:\n env: gblus_dev\n node: 1\n component: batch_service\n - targets:\n - "amraelp00007335.COMPANY.com:9111"\n labels:\n env: gblus_qa\n node: 1\n component: batch_service\n - targets:\n - "amraelp00007335.COMPANY.com:9115"\n labels:\n env: gblus_stage\n node: 1\n component: batch_service\n\nsqs_exporter \n - targets:\n - "amraelp00007334.COMPANY.com:9122"\n labels:\n env: gblus_dev\n component: sqs_exporter\n - targets:\n - "amraelp00007334.COMPANY.com:9123"\n labels:\n env: gblus_qa\n component: sqs_exporter\n - targets:\n - "amraelp00007334.COMPANY.com:9124"\n labels:\n env: gblus_stage\n component: sqs_exporter\n\n\ncadvisor\n\n - targets:\n - "amraelp00007334.COMPANY.com:9103"\n labels:\n env: gblus_dev\n node: 1\n component: cadvisor_exporter\n - targets:\n - "amraelp00007335.COMPANY.com:9103"\n labels:\n env: gblus_dev\n node: 2\n 
component: cadvisor_exporter \n\n\n \nmongodb_exporter\n\n - targets:\n - "amraelp00007334.COMPANY.com:9120"\n labels:\n env: gblus_dev\n component: mongodb_exporter\n \n\nkong_exporter\n - targets:\n - "amraelp00007335.COMPANY.com:9542"\n labels:\n env: gblus_dev\n component: kong_exporter\n"
},
{
"title": "Getting access to PDKS Rancher and Kubernetes clusters",
"pageID": "259433725",
"pageLink": "/display/GMDM/Getting+access+to+PDKS+Rancher+and+Kubernetes+clusters",
"content": "Go to https://requestmanager.COMPANY.com/#/Search nsa-unix and select first link (NSA-UNIX)You will see the form for requesting an access which should be fulfilled like on an example below: Do you need to be added to any Role Groups? YESDo you need privileged access to specific Servers in a Role Group? NOPlease provide the Server Location: Not applicableNIS Domain: Other Add to Role Group(s) UNIX-GBLMDMHUB-US-PROD-ADMIN-U or UNIX-GBLMDMHUB-US-NPROD-ADMIN-U (depends on an environment)Please provide information about Account Privileges: Add Privileges  Please choose the Type of Privilege to Add: UNIX group membershipPlease provide the UNIX Group Name:  UNIX-GBLMDMHUB-US-PROD-COMPUTERS-U or UNIX-GBLMDMHUB-US-NPROD-COMPUTERS-UPlease provide a brief Business Justification:For prod:atp-mdmhub-prod-ameratp-mdmhub-prod-emeaatp-mdmhub-prod-apacPDKS EKS clusters regarding project BoldMove.For nprod:atp-mdmhub-nprod-ameratp-mdmhub-nprod-emeaatp-mdmhub-nprod-apacPDKS EKS clusters regarding project BoldMove.Comments or Special Instructions:  I am creating this request to have an access to Global MDM HUB prod clusters. "
},
{
"title": "UI:",
"pageID": "308256633",
"pageLink": "/pages/viewpage.action?pageId=308256633",
"content": ""
},
{
"title": "Add new role and add users to the UI",
"pageID": "308256635",
"pageLink": "/display/GMDM/Add+new+role+and+add+users+to+the+UI",
"content": "MDM HUB UI roles standards:Here is the role standard that has to be used to get access to the UI by specific users:EnvironmentsNON-PRODPRODDEVQASTAGEPRODGBL****EMEA****AMER****APAC****GBLUS****ALL****Use the 'ALL' keyword with connection to the 'NON-PROD' and 'PROD' - using this approach will produce only 2 roles for the system.Role Schema:<prefix>_<tenant>_<system name>_<application>_<environment>_<system>_<suffix><prefix> - COMM<tenant> - ALL or GBL/AMER/EMEA e.t.c (recommendation is ALL)<system name> - MDMHUB <application> - UI <environment> - PROD / NON-PROD  or specific based on a table above<system> HUB_ADMIN / PTRS e.t.c Important: <system> name has to be in sync with HUB configuration users in e.g http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/emea/nprod/users   <suffix> ROLEexample roles:HUB ADMIN → COMM_ALL_MDMHUB_UI_NON-PROD_HUB_ADMIN_ROLE - HUB UI group for hub-admin users - access to all clusters, and non-prod environments.HUB ADMIN → COMM_ALL_MDMHUB_UI_PROD_HUB_ADMIN_ROLE - HUB UI group for hub-admin users - access to all clusters, and prod environments.PTRS system → COMM_ALL_MDMHUB_UI_NON-PROD_PTRS_ROLE - HUB UI group for PTRS users - access to all clusters, and non-prod environments.PTRS system → COMM_ALL_MDMHUB_UI_PROD_PTRS_ROLE - HUB UI group for PTRS users - access to all clusters, and prod environments.The system is the user name used in HUB. 
All users related to the specific system can have access to the specific role.For example, if someone from the PTRS system wants to have access to the UI, here is how to process such a request:Add user to existing UI roleGo to https://requestmanager1.COMPANY.com/Group/Default.aspxsearch a group:If a role is found in search results you can check current members or request a new memberadd a new user:savego to Cart https://requestmanager1.COMPANY.com/group/Review.aspxand submit the request.If the role does not exist:First, create a new role:click Create a NEW Security Grouphttps://requestmanager1.COMPANY.com/group/Create.aspx?type=secregion -EMEAname - the name of a group primary owner - AJsecondary owner  - Mikołaj MorawskiDescription - e.g. HUB UI group for hub-admin users - access to all clusters, and prod environments.now you can add users to this groupSecond, configure roles and access to the user in HUB:Important: <system> name has to be in sync with HUB configuration users in http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/emea/nprod/users Users can have access to the following roles and APIs:https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.htmlUSER and ADMIN roles:MODIFY_KAFKA_OFFSET             - "/kafka/offset" allows modifying offset on specific Kafka topics related to the systemRESEND_KAFKA_EVENT               - "/jobs/hub/resend_events" - resend events to a specific topicUPDATE_IDENTIFIERS                 -   "/jobs/hub/update_identifiers" - starts update identifiers flowMERGE_UNMERGE_ENTITIES         - "/jobs/hub/merge_unmerge_entities" - starts merge unmerge flow REINDEX_ENTITIES                         - "/jobs/mdm/reindex_entities" - executes Reltio Reindex APICLEAR_CACHE_BATCH                  - "/jobs/hub/clear_batch_cache" - executes clear ETL batch cache operationHUB ADMIN roles:RESEND_KAFKA_EVENT_COMPLEX    - "/jobs/hub/resend_events" - resend events to a specific topic using 
complex API  RECONCILE                - "/jobs/hub/reconciliation_entities" - regenerates events to HUB using simple API - starts JOBRECONCILE_COMPLEX        - "/jobs/hub/reconciliation_entities_complex" - regenerates events to HUB using complex API - starts the jobLIST_PARTIALS                    - "/precallback/partials" - list or resubmit partials that are stuck in the queueAdd roles and topics to the user, e.g.: http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/emea/nprod/users/ptrs.yamlPut a "kafka" section with specific kafka topics:Add an "mdm_admin" section with specific roles and access to topics:e.g.     mdm_admin:      reconciliationTargets:        - emea-dev-out-full-ptrs-eu        - emea-dev-out-full-ptrs-global2        - emea-qa-out-full-ptrs-eu        - emea-qa-out-full-ptrs-global2        - emea-stag-out-full-ptrs-eu        - emea-stag-out-full-ptrs-global2        - gbl-dev-out-full-ptrs        - gbl-dev-out-full-ptrs-eu        - gbl-dev-out-full-ptrs-porind        - gbl-qa-out-full-ptrs-eu        - gbl-stage-out-full-ptrs        - gbl-stage-out-full-ptrs-eu        - gbl-stage-out-full-ptrs-porind      sources:        - ALL      countries:        - ALL      roles: &roles        - MODIFY_KAFKA_OFFSET        - RESEND_KAFKA_EVENT      kafka: *kafkaREMEMBER TO ADD: Add the mdm_auth section - this enables UI access.Without this section the UI will not show HUB Admin tools! mdm_auth: roles: *rolesThe mdm_auth section and roles there will cause the user to only see 2 pages in UI - in that case, MODIFY_KAFKA_OFFSET and RESEND_KAFKA_EVENTWhen the roles and users are configured on the HUB end go to the first step and add selected users to the selected roles.From this point, any new (e.g. PTRS) user can be added to the COMM_ALL_MDMHUB_UI_NON-PROD_PTRS_ROLE and will be able to log in to UI and see the pages and use API through UI."
},
{
"title": "Current users and roles",
"pageID": "347636361",
"pageLink": "/display/GMDM/Current+users+and+roles",
"content": "EnvironmentClientClusterRoleCOMPANY UsersHUB internal userNON-PRODMDMHUBALLCOMM_ALL_MDMHUB_UI_NON-PROD_HUB_ADMIN_ROLEALL HUB Team Members +Andrew.J.Varganin@COMPANY.comNishith.Trivedi@COMPANY.come.g. http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/amer/nprod/users/hub_admin.yamllPRODMDMHUBALLCOMM_ALL_MDMHUB_UI_PROD_HUB_ADMIN_ROLE    ALL HUB Team Members+Andrew.J.Varganin@COMPANY.comNishith.Trivedi@COMPANY.come.g. http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/amer/prod/users/hub_admin.yamlNON-PRODMDMETLALLCOMM_ALL_MDMHUB_UI_NON-PROD_MDMETL_ADMIN_ROLEAnurag.Choudhary@COMPANY.comShikha@COMPANY.comRaghav.Gupta@COMPANY.comKhushboo.Bharti@COMPANY.comManisha.Kansal@COMPANY.comAjit.Tiwari@COMPANY.comSayak.Acharya@COMPANY.comJeevitha.R@COMPANY.comPriya.Suthar@COMPANY.comJoymalya.Bhattacharya@COMPANY.comChinthamani.Kalebu@COMPANY.comArindam.Roy2@COMPANY.comNarendraSingh.Chouhan@COMPANY.comAdrita.Sarkar@COMPANY.comManish.Panda@COMPANY.comMeghana.Das@COMPANY.comHanae.Laroussi@COMPANY.comSomil.Sethi@COMPANY.comShivani.Jha@COMPANY.comPradnya.Raikar@COMPANY.comKOMAL.MANTRI@COMPANY.comAbsar.Ahsan@COMPANY.comAsmita.Datta@COMPANY.come.g. http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/amer/nprod/users/mdmetl_admin.yamlPRODMDMETLALLCOMM_ALL_MDMHUB_UI_PROD_MDMETL_ADMIN_ROLEAnurag.Choudhary@COMPANY.comShikha@COMPANY.comRaghav.Gupta@COMPANY.comKhushboo.Bharti@COMPANY.comManisha.Kansal@COMPANY.comAjit.Tiwari@COMPANY.comSayak.Acharya@COMPANY.comJeevitha.R@COMPANY.comPriya.Suthar@COMPANY.comJoymalya.Bhattacharya@COMPANY.comChinthamani.Kalebu@COMPANY.comArindam.Roy2@COMPANY.comNarendraSingh.Chouhan@COMPANY.comManish.Panda@COMPANY.comMeghana.Das@COMPANY.comHanae.Laroussi@COMPANY.comSomil.Sethi@COMPANY.comShivani.Jha@COMPANY.comPradnya.Raikar@COMPANY.comKOMAL.MANTRI@COMPANY.comAsmita.Datta@COMPANY.come.g. 
https://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/amer/prod/users/mdmetl_admin.yamlNON-PRODPTRSALLCOMM_ALL_MDMHUB_UI_NON-PROD_PTRS_ROLEsagar.bodala@COMPANY.comAishwarya.Shrivastava@COMPANY.comTanika.Das@COMPANY.comRishabh.Singh@COMPANY.comBhushan.Shanbhag@COMPANY.comHasibul.Mallik@COMPANY.comAbhinavMishra.Mishra@COMPANY.comAsmita.Mishra@COMPANY.comPrema.NayagiGS@COMPANY.come.g. http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/emea/nprod/users/ptrs.yamlPRODPTRSALLCOMM_ALL_MDMHUB_UI_PROD_PTRS_ROLEsagar.bodala@COMPANY.comAishwarya.Shrivastava@COMPANY.comTanika.Das@COMPANY.comRishabh.Singh@COMPANY.comBhushan.Shanbhag@COMPANY.comHasibul.Mallik@COMPANY.comAbhinavMishra.Mishra@COMPANY.comAsmita.Mishra@COMPANY.comPrema.NayagiGS@COMPANY.come.g. http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/emea/prod/users/ptrs.yamlNON-PRODCOMPANYALLCOMM_ALL_MDMHUB_UI_NON-PROD_COMPANY_ROLEnavaneel.ghosh@COMPANY.comhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/pull-requests/1707/diff#amer/nprod/users/COMPANY.ymlPRODCOMPANYALLCOMM_ALL_MDMHUB_UI_PROD_COMPANY_ROLEnavaneel.ghosh@COMPANY.comhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/pull-requests/1707/diff#amer/nprod/users/COMPANY.yml"
},
{
"title": "SSO and roles",
"pageID": "322564881",
"pageLink": "/display/GMDM/SSO+and+roles",
"content": "To login to UI dashboard You have to be in COMPANY network. sso authorization is made by SAML, using COMPANY pingfederate.Auth flowSSO loginSAML login roleAfter successful authentication with SAML we are receiving roles from Active Directory (Group Manager - distribution list)Then we are decoding roles using following regexp:COMM_(?<tenant>[A-Z]+)_MDMHUB_UI_(?<environment>NON-PROD|PROD)_(?<system>.+)_ROLEWhen role is matching environment and tenant we are getting roles by searching system in user configuration.Backend AD groupsServiceNPROD GroupPROD GroupDescriptionKibanaCOMM_ALL_MDMHUB_KIBANA_NON-PROD_ADMIN_ROLECOMM_ALL_MDMHUB_KIBANA_PROD_ADMIN_ROLECOMM_ALL_MDMHUB_KIBANA_NON-PROD_VIEWER_ROLECOMM_ALL_MDMHUB_KIBANA_PROD_VIEWER_ROLEGrafanaCOMM_ALL_MDMHUB_GRAFANA_PROD_ADMIN_ROLECOMM_ALL_MDMHUB_GRAFANA_PROD_VIEWER_ROLEAkhqCOMM_ALL_MDMHUB_KAFKA_NON-PROD_ADMIN_ROLECOMM_ALL_MDMHUB_KAFKA_PROD_ADMIN_ROLECOMM_ALL_MDMHUB_KAFKA_NON-PROD_VIEWER_ROLECOMM_ALL_MDMHUB_KAFKA_PROD_VIEWER_ROLEMonitoringCOMM_ALL_MDMHUB_ALL_NON-PROD_MON_ROLECOMM_ALL_MDMHUB_ALL_PROD_MON_ROLEThis groups aggregates users that are responsible for monitoring of MDMHUB AirflowCOMM_ALL_MDMHUB_AIRFLOW_NON-PROD_ADMIN_ROLECOMM_ALL_MDMHUB_AIRFLOW_PROD_ADMIN_ROLECOMM_ALL_MDMHUB_AIRFLOW_NON-PROD_VIEWER_ROLECOMM_ALL_MDMHUB_AIRFLOW_PROD_VIEWER_ROLE"
},
{
"title": "UI Connect Guide",
"pageID": "322540727",
"pageLink": "/display/GMDM/UI+Connect+Guide",
"content": "Log in to UI and switch TenantsTo log in to UI please use the following link: https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ui-emea-devLog in to UI using your COMPANY credentials:There is no need to know each UI address, you can easily switch between Tenants using the following link (available on the TOP RIGHT corner in UI near the USERNAME):What pages are available with the default VIEW roleBy default, you are logged in with the default VIEW role, the following pages are available:HUB StatusYou can use the HUB Dashboard main page that contains HUB platform status: Event processing details, Snowflake refresh time, started batches and ETA to load data to Reltio or get Events from Reltio.Ingestion Services ConfigurationThis page contains the documentation related to the Data Quality checks, Source Match Categorization, Cleansing & Formatting, Auto-Fills, and Minimum Viable Profile Checks.You can choose a filter to switch between different entity types and use input boxes to filter results.You can use the 'Category' filter to include the operations that you are interested inYou can use the 'Query' filter and put any text to find what you are looking for (e.g. 'prefix' to find rules with prefix word)You can use the 'Date' filter to find rules created or updated after a specific time - now using this filter you can easily find the rules added after data reload and reload data one more time to reflect changes. 
This page also contains documentation related to duplicate identifiers and noise lists.You can choose a filter to switch between different entity types and use input boxes to filter resultsIngestion Services TesterThis page contains the JSON tester: paste input JSON and click the 'Test' button to check the output JSON with all rules appliedClick the 'Difference' to get only the changed sectionsClick the 'Validation result' to get the rules that were executed.More details here: HUB UI User GuideWhat operations are available in the UIAs a user, you can request access to the technical operations in HUB. The details on how to access more operations are described in the section below.Here you will get to know the different UI operations and what can be done using these operations:HUB Admin allows to:Kafka OffsetTechnical operationOn this page a user can modify the Kafka offset on a specific consumer groupA System/User that has access to this page will be allowed to maintain the consumer group offset and change it to:latestearliestspecific date timeshift by a specific number of events.HUB ReconciliationTechnical operationUsed internally by HUB Team.This operation allows us to mimic Reltio events generation - it generates the events to the input HUB topic so that we can reprocess the events.You can use this page and generate events by:provide an input array with entity/relation URIsorprovide the query and select the source/market that you want to reprocess.Kafka Republish EventsTechnical operationThis operation can be used to generate events for your Kafka topicUse case - you are consuming data from HUB and you want to test something on non-prod environments and consume events for a specific market one more time. 
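The offset choices listed for the Kafka Offset operation above (latest, earliest, a specific date time, shift by a number of events) map directly onto standard Kafka consumer-group offset resets. A minimal sketch using the stock kafka-consumer-groups.sh CLI, assuming placeholder broker, group, and topic names (not real HUB names); the commands are echoed rather than executed so the sketch runs without a cluster:

```shell
# Placeholder values throughout - broker, group, and topic names are examples only.
BROKER="kafka-broker:9092"
GROUP="example-consumer-group"
TOPIC="example-topic"

# earliest (and analogously --to-latest for latest):
echo kafka-consumer-groups.sh --bootstrap-server "$BROKER" --group "$GROUP" \
  --topic "$TOPIC" --reset-offsets --to-earliest --execute

# specific date time:
echo kafka-consumer-groups.sh --bootstrap-server "$BROKER" --group "$GROUP" \
  --topic "$TOPIC" --reset-offsets --to-datetime 2024-01-01T00:00:00.000 --execute

# shift by a specific number of events (here: back by 1000):
echo kafka-consumer-groups.sh --bootstrap-server "$BROKER" --group "$GROUP" \
  --topic "$TOPIC" --reset-offsets --shift-by -1000 --execute
```

Note that Kafka only allows an offset reset while the consumer group has no active members; that constraint applies however the reset is triggered.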
You want to receive 1000 events for the France market for your testing.You can use this page and generate events for the target topic:Specify the Countries/Sources/Limits/Dates and Target Reconciliation topic - as a result, you will receive the events.Reltio ReindexTechnical operationThis operation executes the Reltio Reindexing operationYou can use this page and generate events by:provide the query and select the source/market that you want to reprocess.orprovide the input file with entity/relation URIs, that will be sent to Reltio API.Merge/Unmerge EntitiesBusiness operationThis operation consumes the input file and executes the merge/unmerge operations in ReltioMore details about the file and process are described here: Batch merge & unmergeUpdate IdentifiersBusiness operationThis operation consumes the input file and executes the update identifiers operations in ReltioMore details about the file and process are described here: Batch update identifiersClear CacheBusiness operationClear ETL Batch CacheMore details about the file and process are described here: Batch clear ETL data load cacheHow to request additional access to new operationsPlease send the following email to the HUB DL: DL-ATP_MDMHUB_SUPPORT@COMPANY.comSubject:HUB UI - Access request for <user-name/system-name>Body:Please provide the access / update the existing access for <user-name/system-name> to HUB Admin operations.IDDetailsComments:1Action neededAdd user to the HUB UIEdit user in the HUB UI (please provide the existing group name)<any other>2TenantGBL, EMEA, AMER, GBLUS, APAC/ALLTenant - more details in EnvironmentsBy default please select ALL Tenants, but if you need access only to a specified one please select.3Environments PROD / NON-PROD or specific: DEV/QA/STAGE/PRODBy default please select PROD / NON-PROD environments, but if you need access only to a specified one please select.4Permissions rangeChoose the operation:Kafka OffsetHUB ReconciliationKafka Republish EventsReltio ReindexMerge/Unmerge EntitiesUpdate IdentifiersClear Cache5COMPANY TeamETL/COMPANY or DSR or Change Management etc.8Business justificationNeeds access to execute merge/unmerge operation in EMEA/AMER/APAC PROD Reltio9Point of contactIf you are from the system please provide the DL email and system details.7Sources<optional - list of sources to which the user should have access>required in Events/Reindex/Reconciliation operations3Countries<optional - list of countries to which the user should have access>required in Events/Reindex/Reconciliation operationsThe request will be processed after Andrew.J.Varganin@COMPANY.com approval. In the response, you will receive the Group Name. Please use this for future reference.e.g. PTRS system roles used in the PTRS system to manage UI operations.   PTRS system → COMM_ALL_MDMHUB_UI_NON-PROD_PTRS_ROLE - HUB UI group for PTRS users - access to all clusters, and non-prod environments.   PTRS system → COMM_ALL_MDMHUB_UI_PROD_PTRS_ROLE - HUB UI group for PTRS users - access to all clusters, and prod environments.HUB Team will use the following SOP to add you to a selected role: Add a new role and add users to the UIGet HelpIn case of any questions, the GetHelp page or full HUB documentation is available here (UI page footer):GetHelpWelcome to the Global MDM Home!"
},
{
"title": "Users:",
"pageID": "302705550",
"pageLink": "/pages/viewpage.action?pageId=302705550",
"content": ""
},
{
"title": "Add Direct API User to HUB",
"pageID": "273694347",
"pageLink": "/display/GMDM/Add+Direct+API+User+to+HUB",
"content": "To add a new user to MDM HUB direct API a few steps must be done. That document describes what activities must be fulfilled and who is responsible fot them.Create PingFederate user - client's responsibility  If the client's authentication method is oauth2 then there is a need to create PingFederate user.To add a user you must have a Ping Federate user created: How to Request PingFederate (PXED) External OAuth 2.0 Account Caution: If the authentication method is key auth then HUB Team generates it and sends it securely way to the client.Send a request to MDM HUB that contains all necessary data - client's responsibility Send a request to create a new user with direct API access to HUB Team: dl-atp_mdmhub_support@COMPANY.comThe request must contain as follows:1Action needed2PingFederate username3Countries4Tenant5Environments6Permissions range7Sources8Business justification9Point of contact10GatewayDescriptionAction needed this is a place where you decide if you want to create a new user or modify the existing one.PingFederate username you need to create a user on the PingFederate side. Its username is crucial to authenticate on the HUB side. If you do not have a PingFederate user please check: https://confluence.COMPANY.com/display/GMDM/How+to+request+PingFederate+%28PXED%29+external+OAuth+2.0+accountCountries - list of countries that access to will be grantedTenant a tenant or list of tenants where the user will be created. Please notice that if you have a connection from open internet only EMEA is possible. If you have a local application split to Reltio Region it is recommended to request a local tenant. If you have a global solution you can call EMEA and your requests will be routed by HUB.Environments list of environment instances DEV/QA/STG/PRODPermissions range do you need to write or read/write? To which entities do you need access? 
HCO/HCP/MCOSources to which sources do you need to have access?Business justification please describeWhy do you have a connection with HUB?Why the user must be created/modified?Whats the project name?Whos the project manager?Point of contact please add a DL group name - in case of any issues connected with that userWhich API you want to call: EMEA, AMER, APAC,etcPrepare new user on MDM HUB side - HUB Team Responsibility Store clients' request in dedicated confluence space: ClientsIn the COMPANY tenants, there is a need to connect the new user with API Router directly.Change API router configuration, and add a new user with:user PingFederate name or when the user uses key auth add API key to secrets.yamlsourcescountriesrolesChange Manager configuration, addsourcescountriesChange DCR service configuration - if applicabledcrServiceConfig-  initTrackingDetailsStatus, initTrackingDetail, dcrTyperoles - CREATE_DCR, GET_DCRYou need to check how the request will be routed. If there is a  need to make a routing configuration, follow these steps:change API Router configuration by adding new countries to proper tenantschange Manager configuration in destinated tenant by addingsourcescountries"
},
{
"title": "Add External User to MDM Hub",
"pageID": "164470196",
"pageLink": "/display/GMDM/Add+External+User+to+MDM+Hub",
"content": "Kong configurationFirstly You need to have users logins from Ping Federate for every envGo folder inventory/{{ kong_env }}/group_vars/kong_v1 in repository mdm-hub-env-configFind section PLUGINS in file kong_{{ env }}.yml and then rule with name mdm-external-oauthin this section find "users_map"add there new entry with following rule:\n- "<user_name_from_ping_federate>:<user_name_in_mdm_hub>"\nchange False to True in create_or_update setting for this rule\ncreate_or_update: True\nRepeat this steps( a-c ) for every environment {{ env }} you want to apply changes to(e.g., dev, qa, stage){{ kong_env }} - environment on which kong instance is deployed{{ env }} - environment on which MDM Hub instance is deployedkong_envenvdevdev, mapp, stageprodproddev_gblusdev_gblus, qa_gblus, stage_gblusprod_gblusprod_gblusdev_usdev_usprod_usprod_usGo to folder inventory/{{ env }}/group_vars/gw-servicesIn file gw_users.yml add section with new user after last added user, specify roles and sources needed for this user. 
E.g.,User configuration\n- name: "<user_name_in_mdm_hub>"\n description: "<Some description>"\n defaultClient: "ReltioAll"\n getEntityUsesMongoCache: yes\n lookupsUseMongoCache: yes\n roles:\n - <specify_only_roles_that_are_required_for_this_user>\n countries:\n - US\n sources: \n\t- <specify_only_sources_needed by this user>\nRepeat this step for every environment {{ env }} you want to apply changes to( e.g., dev, qa, stage)After configuration changes You need to update kong using following commandfor nonprod gblus envsGBLUS NPROD - kong update\nansible-playbook update_kong_api_v1.yml -i inventory/dev_gblus/inventory --limit kong_v1_01 --vault-password-file=~/ansible.secret\nfor prod gblus envGBLUS PROD - kong update\nansible-playbook update_kong_api_v1.yml -i inventory/prod_gblus/inventory --limit kong_v1_01 --vault-password-file=~/ansible.secret\nfor nprod gbl envsGBL NPROD - kong update\nansible-playbook update_kong_api_v1.yml -i inventory/dev/inventory --vault-password-file=~/ansible.secret\nfor prod gbl envGBL PROD - kong update\nansible-playbook update_kong_api_v1.yml -i inventory/prod/inventory --vault-password-file=~/ansible.secret\nfor nprod US envUS NPROD - kong update\nansible-playbook update_kong_api_v1.yml -i inventory/dev_us/inventory --vault-password-file=~/ansible.secret\nfor prod USUS PROD - kong update\nansible-playbook update_kong_api_v1.yml -i inventory/prod_us/inventory --vault-password-file=~/ansible.secret\nTroubleshootingIn case when there will be a problem with deploying You need to set create_or_update as True also for route and manager service.Ansible secretTo use this script You need to have ansible.secret file created in your home directory or adjust vault-password-file if needed.Another option is to change --vault-password-file to --ask-vault and provide ansible vault during the runtime.Before commiting changes find all occurrences where You set create_or_update to true and change it again to:\ncreate_or_update: False\nThen commit 
changesRedeploy gateway services on all modified envs. Before deploying please verify if there is no batch running in progressJenkins job to deploy gateway services:https://jenkins-gbicomcloud.COMPANY.com/job/mdm-gateway/"
},
{
"title": "Add new Batch to HUB",
"pageID": "310944945",
"pageLink": "/display/GMDM/Add+new+Batch+to+HUB",
"content": "To add a new batch to MDM HUB  a few steps must be done. That document describes what activities must be fulfilled and who is responsible for them.Check source and country configurationThe first step is to check if DQ rules and SMC are configured for the new source. Repository: mdm-config-registry; Path: \\config-hub\\<env_tenant>\\mdm-manager\\quality-service\\quality-rules\\If not you have to immediately send an email to a person that requested a new batch. This condition is usually performed on a separate task as prerequisite to adding the batch configuration."This is a new source. You have to send DQ and SMC requirements for a new source to A.J. and Eleni. Based on it a new HUB requirement deck will be prepared. When we received it the task can be planned. Until that time the task is blocked." The same exercise has to be made when we get requirements for a new country.Authorization and authenticationClients use mdmetl batch service user to populate data to Reltio. There is no changes needed.Send a request to MDM HUB that contains all necessary data - client's responsibility Send a request to create a new batch to HUB Team: dl-atp_mdmhub_support@COMPANY.comThe request must contain as follows:subject arealist of stages HCP/HCO/Affiliationsdata sourcecountries listsource namebatch namefile typefull/incrementalfrequencybussines justificationsingle point of contact on client sidePrepare new batch on MDM HUB side - HUB Team Responsibility Repository: mdm-hub-cluster-envChanges on manager levelIn mdmetl.yaml configuration must be extended with:Path: \\<tenant>\\<env>\\users\\mdmetl.yamlNew sourcesNew countriesAdd new batch with stages to batch_service, example:batch_service: defaultClient: "ReltioAll" description: "MDMETL Informatica IICS User - BATCH loader" batches: "ONEKEY": <- new batch name - "HCPLoading" <- new stage - "HCOLoading" <- new stage - "RelationLoading" <- new stageIn the MDM manager config, if the batch includes RelationLoading stage then 
add to the refAttributesEnricher configuration relationType: ProviderAffiliationsrelationType: ContactAffiliationsrelationType: ACOAffiliationsNew sourcesNew countriesChanges in batch-service levelBased on stages that are adding there is a need to change a batch-service configuration.Path: \\<tenant>\\<env>\\namespaces\\<namespace>\\config_files\\batch-service\\config\\application.ymlAdd configuration in BatchWorkflows, example:- batchName: "PFORCERX_ODS" batchDescription: "PFORCERX_ODS - HCO, HCP, Relation entities loading" stages: - stageName: "HCOLoading" - stageName: "HCOSending" softDependentStages: [ "HCOLoading" ] processingJobName: "SendingJob" - stageName: "HCOProcessing" dependentStages: [ "HCOSending" ] processingJobName: "ProcessingJob" # -------------------------------- - stageName: "HCPLoading" - stageName: "HCPSending" softDependentStages: [ "HCPLoading" ] processingJobName: "SendingJob" - stageName: "HCPProcessing" dependentStages: [ "HCPSending" ] processingJobName: "ProcessingJob" # ------------------ - stageName: "RelationLoading" - stageName: "RelationSending" dependentStages: [ "HCOProcessing", "HCPProcessing" ] softDependentStages: [ "RelationLoading" ] processingJobName: "SendingJob" - stageName: "RelationProcessing" dependentStages: [ "RelationSending" ] processingJobName: "ProcessingJob"If batch is full load than two additional stages must be configured, it destination is to allows deletating profiles:- stageName: "EntitiesUnseenDeletion" dependentStages: [ "HCOProcessing" ] processingJobName: "DeletingJob"- stageName: "HCODeletesProcessing" dependentStages: [ "EntitiesUnseenDeletion" ] processingJobName: "ProcessingJob"2. 
Add configuration to bulkConfiguration, example:"PFORCERX_ODS": HCOLoading: bulkLimit: 25 destination: topic: "${env}-internal-batch-pforcerx-ods-hco" maxInFlightRequest: 5 HCPLoading: bulkLimit: 25 destination: topic: "${env}-internal-batch-pforcerx-ods-hcp" maxInFlightRequest: 5 RelationLoading: bulkLimit: 25 destination: topic: "${env}-internal-batch-pforcerx-ods-rel" maxInFlightRequest: 5All new dedicated topics must be configured. There is a need to add configuration in kafka-topics.yml, example:emea-prod-internal-batch-pulse-kam-hco: partitions: 6 replicas: 33. Add configuration in sendingJob, example:PFORCERX_ODS: HCOSending: source: topic: "${env}-internal-batch-pforcerx-ods-hco" maxInFlightRequest: 5 bulkSending: false bulkPacketSize: 10 reltioRequestTopic: "${env}-internal-async-all-mdmetl-user" reltioReponseTopic: "${env}-internal-async-all-mdmetl-user-ack" HCPSending: source: topic: "${env}-internal-batch-pforcerx-ods-hcp" maxInFlightRequest: 5 bulkSending: false bulkPacketSize: 10 reltioRequestTopic: "${env}-internal-async-all-mdmetl-user" reltioReponseTopic: "${env}-internal-async-all-mdmetl-user-ack" RelationSending: source: topic: "${env}-internal-batch-pforcerx-ods-rel" maxInFlightRequest: 5 bulkSending: false bulkPacketSize: 10 reltioRequestTopic: "${env}-internal-async-all-mdmetl-user" reltioReponseTopic: "${env}-internal-async-all-mdmetl-user-ack"4. If a batch is a full load then deletingJob must be configured, for example:PULSE_KAM: EntitiesUnseenDeletion: maxDeletesLimit: 10000 queryBatchSize: 10 reltioRequestTopic: "${env}-internal-async-all-mdmetl-user" reltioResponseTopic: "${env}-internal-async-all-mdmetl-user-ack""
},
{
"title": "How to Request PingFederate (PXED) External OAuth 2.0 Account",
"pageID": "263491721",
"pageLink": "/display/GMDM/How+to+Request+PingFederate+%28PXED%29+External+OAuth+2.0+Account",
"content": "This instruction describes the Client steps that should be triggered to create the PingFederate account. Referring to security requirements HUB should only know the details about the UserName created by the PXED Team. HUB is not requesting external accounts, passwords and all the details are shared only with the Client. The client is sharing the user name to HUB and only after the User name is configured Client will gain the access to HUB resources. Contact Persons:Varganin, A.J. <Andrew.J.Varganin@COMPANY.com> / DL-ATP_MDMHUB_SUPPORT@COMPANY.com - All details related to VCAS Reference number,CMDB ID (Production Deployment),IPRM Solution profile number and other details. PingFederate (PXED) - DL-CIT-PXED Operations <DL-CIT-PXEDOperations@COMPANY.com>; Zhang, Christine <Christine.Zhang@COMPANY.com>Details required to fulfill the PXED request are in this doc:User Name standard: <SYSTEM_NAME>-MDM_clientSteps:Go to https://requestmanager.COMPANY.com/#/In Search For Application type: PXED Pick - Application enablement with enterprise authentication services (PXED, LDAP and/or SSO)Fulfill the request and send.Wait for the user name and passwordAfter confirmation share the Client Id with HUB and wait for the grant of access. Do not share the password. 
EXAMPLE: For the Reference Example request sent for PFORCEOL user:Request TicketGBL32702829iTicket IDNameVarganin, Andrew JosephRequested user nameAD UsernameVARGAA08Requested user IdUser DomainAMERRegion (AMER/EMEA/APAC/US...)Request ID20200717112252425request IDHosting locationExternalHosting location of the Client services: (External or  Internal COMPANY Network)VCAS Reference numberV...VCAS Reference numberData FeedNo, API/Servicesflow - requests sent to HUB API then - API/ServicesApplication access methodsWeb BrowserType of access for the Client application - (Intranet/Web Browser etc.) Application User baseCOMPANY colleaguesContractorsApplication User baseApplication access devicesLaptop/DesktopTablets (iPad/Android/Windows)Application access devicesApplication Access LocationsInternetLocation (External - Internet / Internal - Intranet)Application Name<EXAMPLE: PFORCEOL (BIOPHARMA)>Requested application name that requires new accountCMDB ID (Production Deployment)SC....CMDB ID (Production Deployment)IPRM Solution profile number....IPRM Solution profile numberNumber of users for the application...Number of users for the applicationConcurrent Users....Concurrent UsersCommentsApplication-to-Application Integration using NSA (Non-Standard Service Account.)  PTRS will use REST APIs to authenticate to and access COMPANY Global MDM Services.This application will access MDM API Services (MDM_client) and will need OAuth2 account (KOL-MDM_client) for access to those APIs/Servicesfull description of requested account and integrationApplication ScopeAll UsersApplication ScopeReferenced tickets (only for example / reference purposes):https://btondemand.COMPANY.com/My-Tickets/Ticket-Details?ticket=GBL32702829ihttps://requestmanager.COMPANY.com/#/request/20201208091510997"
},
{
"title": "Hub Operations",
"pageID": "302705582",
"pageLink": "/display/GMDM/Hub+Operations",
"content": ""
},
{
"title": "Airflow:",
"pageID": "164470119",
"pageLink": "/pages/viewpage.action?pageId=164470119",
"content": ""
},
{
"title": "Checking that Process Ends Correctly",
"pageID": "164470118",
"pageLink": "/display/GMDM/Checking+that+Process+Ends+Correctly",
"content": "To check that process ended without any issues you need to login into Prometheus and check the Alerts Monitoring PROD dashboard. You have to check rows in the GBL PROD Airflow DAG's Status panel. If you can see red rows (like on blow screenshot) it means that there occured some issues:Details of issues are available in the Airflow."
},
{
"title": "Common Problems",
"pageID": "164470117",
"pageLink": "/display/GMDM/Common+Problems",
"content": "Failed task getEarliestUploadedFileDuring reviewing of failed DAG you noticed that the task getEarliestUploadedFile has failed state. In the task's logs you can see the line like this:[2020-03-19 18:44:07,082] {{docker_operator.py:252}} INFO - Unable to find the earliest uploaded file. S3 directory is empty?The issue is because getEarliestUploadedFile was not able to download the export file. In this case you need to check the S3 localtion and verify that the correct export file was uploded to valid location."
},
{
"title": "Deploy Airflow Components",
"pageID": "164470010",
"pageLink": "/display/GMDM/Deploy+Airflow+Components",
"content": "Deployment procedure is implemented as ansible playbook. The source code is stored in MDM Environment configuration repository. The runnable file is available under the path:  https://github.com/COMPANY/mdm-reltio-handler-env/blob/master/install_mdmgw_airflow_services.yml and can be run by the command: ansible-playbook install_mdmgw_airflow_services.yml -i inventory/[env name]/inventory  Deployment has following steps: Creating directory structure on execution host, Templating configuration files and transferring those to config location, Creating DAG, variable and connections in Apache Airflow, Restarting Airflow instance to apply configuration changes. After successful deployment the dag and configuration changes should be available to trigger in Airflow UI. "
},
{
"title": "Deploying DAGs",
"pageID": "164469947",
"pageLink": "/display/GMDM/Deploying+DAGs",
"content": "To deploy newly created DAG or configuration changes you have to run the deployment procedure implemented as ansible playbook install_mdmgw_airflow_services.yml:ansible-playbook install_mdmgw_airflow_services.yml -i inventory/[env name]/inventoryIf you you have access to Jenkins you can also use jenkins' jobs: https://jenkins-gbicomcloud.COMPANY.com/job/MDM_Airflow_Deploy_jobs/. Each environment has its own deploy job. Once you choose the right job you have to:1 Click the button "Build Now": 2 After a few seconds the stage icon "Choose dags to deploy" will be active and will wait for choosing DAG to deploy:3 Choose the DAG you wanted to deploy and approve you decision.After this job will deploy all changes made by you to Airflow's server."
},
{
"title": "Error Grabbing Grapes - hub_reconciliation_v2",
"pageID": "218438556",
"pageLink": "/display/GMDM/Error+Grabbing+Grapes+-+hub_reconciliation_v2",
"content": "In hub_reconciliation_v2 airflow dag, during stage  entities_generate_hub_reconciliation_events grape error might occur:\norg.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:\nGeneral error during conversion: Error grabbing Grapes\n(...)\nCause:That could be caused by connectivity/configuration issues.Workaround:For this dag dependencies are mounted in container. Mounted directory is located in airflow server on path: /app/airflow/{{ env_name }}/hub_reconciliation_v2/tmp/.groovy/grapes/To solve this problem copy libs from working dag. E.g. hub_reconciliation_v2_gblus_prod \namraelp00007847.COMPANY.com/app/airflow/gblus_prod/hub_reconciliation_v2/tmp/.groovy/grapes\n"
},
{
"title": "Batches (Batch Service):",
"pageID": "302705680",
"pageLink": "/pages/viewpage.action?pageId=302705680",
"content": ""
},
{
"title": "Adding a New Batch",
"pageID": "164469956",
"pageLink": "/display/GMDM/Adding+a+New+Batch",
"content": "1. Add batch to batch_service.yml in the following sections- add batch info to section batchWorkflows - add basing on some already defined- add bulk configuration- add to sendingJob- add to deletingJob if needed2. Add source and user for batch to batch_service_users.yml- add for user mdmetl_nprod apropriate source and batch3. Add user to:for GBL / GBLUS - /inventory/<env>/group_vars/gw-services/gw_users.ymlfor EMEA / AMER / APAC - /config_files/<env>manager/config/users- for appropriate source, country and roles4. Add topic to bundle section in manager/config/application.yml 5. Add kafka topicsWe use kafka manager to add new topics which can be found under directory /inventory/<env>/group_vars/kafka/manager/topics.ymlFirstly set create_or_update to True after creation of topics change to False7. Create topics and redeploy services by using Jenkinshttps://jenkins-gbicomcloud.COMPANY.com/job/mdm-gateway/8. Redeploy gateway on others envs qa, stage, prod only if there is no batch running - check it in mongo on batchInstance collection using following query: {"status" : "STARTED"}9. Ask if new source should be added to dq rules"
},
{
"title": "Cache Address ID Clear (Remove Duplicates) Process",
"pageID": "163917838",
"pageLink": "/display/GMDM/Cache+Address+ID+Clear+%28Remove+Duplicates%29+Process",
"content": "This process is similar to the Cache Address ID Update Process . So the user should load the file to mongo and process it with the following steps: Download the files that were indicated by the user and apply on a specific environment (sometimes only STAGE and sometimes all envs)For example - 3 files - /us/prod/inbound/cdw/one-time-feeds/other/Merge these file to one file - Duplicate_Address_Ids_<date>.txtProceed with the script.sh based on the Cache Address ID Update ProcessGenerated Extract load to the removeIdsFromkeyIdRegistry collectionmongoimport --host=localhost:27017 --username=admin --password=zuMMQvMl7vlkZ9XhXGRZWoqM8ux9d08f7BIpoHb --authenticationDatabase=admin --db=reltio_stage --collection=removeIdsFromkeyIdRegistry --type=csv --columnsHaveTypes --fields="_id.string(),key.string(),sequence.string(),generatedId.int64(),_class.string()" --file=EXTRACT_Duplicate_Address_Ids_16042021.txt --mode=insertCLEAR keyIdRegistrydocker exec -it mongo_mongo_1 bashcd /data/configdbNPROD - nohup mongo duplicate_address_ids_clear.js &PROD   - nohup mongo --host mongo_reltio_repl_set/amraelp00007844.COMPANY.com:27017,amraelp00007870.COMPANY.com:27017,amraelp00007847.COMPANY.com:28017 -u mdm_hub -p <passw>--authenticationDatabase reltio_prod duplicate_address_ids_clear.js &FOR REFERENCE SCRIPT:\nCLEAR keyIdRegistry\n db = db.getSiblingDB('reltio_dev')\n db.auth("mdm_hub", "<pass>")\n \n db = db.getSiblingDB('reltio_prod')\n db.auth("mdm_hub", "<pass>")\n\n\n\n print("START")\n var start = new Date().getTime();\n\n\n var cursor = db.getCollection("removeIdsFromkeyIdRegistry").aggregate( \n [\n \n ], \n { \n "allowDiskUse" : false\n }\n )\n \n cursor.forEach(function (doc){\n db.getCollection("keyIdRegistry").remove({"_id": doc._id});\n });\n\n var end = new Date().getTime();\n var duration = end - start;\n print("duration: " + duration + " ms")\n print("END")\n\n\n nohup mongo duplicate_address_ids_clear.js &\n\n nohup mongo --host 
mongo_reltio_repl_set/amraelp00007844.COMPANY.com:27017,amraelp00007870.COMPANY.com:27017,amraelp00007847.COMPANY.com:28017 -u mdm_hub -p <pass> --authenticationDatabase reltio_prod duplicate_address_ids_clear.js &\nCLEAR batchEntityProcessStatus checksumsdocker exec -it mongo_mongo_1 bashcd /data/configdbNPROD - nohup mongo unset_checsum_duplicate_address_ids_clear.js &PROD   - nohup mongo --host mongo_reltio_repl_set/amraelp00007844.COMPANY.com:27017,amraelp00007870.COMPANY.com:27017,amraelp00007847.COMPANY.com:28017 -u mdm_hub -p <pass> --authenticationDatabase reltio_prod unset_checsum_duplicate_address_ids_clear.js &FOR REFERENCE SCRIPT\nCLEAR batchEntityProcessStatus\n\n db = db.getSiblingDB('reltio_dev')\n db.auth("mdm_hub", "<pass>")\n \n db = db.getSiblingDB('reltio_prod')\n db.auth("mdm_hub", "<pass>")\n\n\n print("START")\n var start = new Date().getTime();\n var cursor = db.getCollection("removeIdsFromkeyIdRegistry").aggregate( \n [\n ], \n { \n "allowDiskUse" : false\n }\n )\n \n cursor.forEach(function (doc){\n var key = doc.key \n var arrVars = key.split("/");\n \n var type = "configuration/sources/"+arrVars[0]\n var value = arrVars[3];\n \n print(type + " " + value)\n \n var result = db.getCollection("batchEntityProcessStatus").update(\n { "batchName" : { $exists : true }, "sourceId" : { "type" : type, "value" : value } },\n { $set: { "checksum": "" } },\n { multi: true}\n )\n \n printjson(result);\n \n });\n \n var end = new Date().getTime();\n var duration = end - start;\n print("duration: " + duration + " ms")\n print("END")\n\n nohup mongo unset_checsum_duplicate_address_ids_clear.js &\n \n nohup mongo --host mongo_reltio_repl_set/amraelp00007844.COMPANY.com:27017,amraelp00007870.COMPANY.com:27017,amraelp00007847.COMPANY.com:28017 -u mdm_hub -p <pass> --authenticationDatabase reltio_prod unset_checsum_duplicate_address_ids_clear.js &\nVerify nohup outputCheck a few rows and verify that these rows no longer exist in the KeyIdRegistry collectionCheck a few profiles and verify that the checksum was cleared in the BatchEntityProcessStatus collectionISSUE - for the ONEKEY profiles there is a difference between the generated cache and the corresponding profile.ISSUE - for the GRV profiles there is a difference between the generated cache and the corresponding profile. - check the crosswalks values in COMPANY_ADDRESS_ID_EXTRACT_PAC_files - should be e.g. 00002b9b-f327-456c-959c-fd5b04ed04b8ISSUE - for the ENGAGE 1.0 profiles there is a difference between the generated cache and the corresponding profile.  check the crosswalks values in COMPANY_ADDRESS_ID_EXTRACT_ENG_ files - should be e.g. 00002b9b-f327-456c-959c-fd5b04ed04b8Please check the following example:CUST_SYSTEM,CUST_TYPE,SRC_ADDR_ID,SRC_CUST_ID,SRC_CUST_ID_TYPE,PFZ_ADDR_ID,PFZ_CUST_ID,SRC_SYS,MDM_SRC_SYS,EXTRACT_DTPROBLEM : HCPM,HCP,0000407429,8091473,HCE,38357661,1374316,HCPS,HCPS,2021-04-15OK            : HCPM,HCP,a012K000022cqBoQAI,0012K00001lCEyYQAW,HCP,109525669,178336284,VVA,VVA,2021-04-15For VVA the crosswalk is equal to 001A000001VgOEVIA3 and it is easy to match with the ICUE profile and clear the cache. For ONEKEY the generated row is equal to - COMPANYAddressIDSeq|ONEKEY/HCP/HCE/8091473/0000407429,ONEKEY/HCP/HCE/8091473/0000407429,COMPANYAddressIDSeq,38357661,com.COMPANY.mdm.generator.db.KeyIdRegistryThe 8091473 is not a crosswalk, so to remove the checksum from the BatchEntityProcessStatus collection there is a need to find the profile in Reltio - the crosswalk is WUSM01113231 - and clear the cache in the BatchEntityProcessStatus collection.In my example, there was only one crosswalk. So it was easy to find this profile. For multiple profiles, there is a need to find a solution. 
( I think we need to ask CDW to provide the file for ONEKEY with an additional crosswalk column, so we will be able to match the crosswalk with the Key and clear the checksum)    Solution: once we receive an ONEKEY KeyIdRegistry Update file, ask the COMPANY Team to generate crosswalk ids - a simple CSV fileThe file received from CDW does not contain crosswalk ids, only COMPANYAddressIds - example input - https://gblmdmhubprodamrasp101478.s3.amazonaws.com/us/prod/inbound/cdw/one-time-feeds/COMPANY-address-id/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_HCPS_20210511.txtAsk the DT Team and download the CSV fileLoad the file to a TMP collection in Mongo e.g. - AddressIDCrosswalks_COMPANY_ADDRESS_ID_EXTRACT_HCPS_20210511Execute the following:\nCLEAR batchEntityProcessStatus based on crosswalks ID list \n\n db = db.getSiblingDB('reltio_dev')\n db.auth("mdm_hub", "<pass>")\n \n db = db.getSiblingDB('reltio_prod')\n db.auth("mdm_hub", "<pass>")\n\n\n print("START")\n var start = new Date().getTime();\n var cursor = db.getCollection("AddressIDCrosswalks_COMPANY_ADDRESS_ID_EXTRACT_HCPS_20210511").aggregate( \n [\n ], \n { \n "allowDiskUse" : false\n }\n )\n \n cursor.forEach(function (doc){\n \n var type = "configuration/sources/ONEKEY";\n var value = doc.COMPANYcustid_individualeid;\n \n print(type + " " + value)\n \n var result = db.getCollection("batchEntityProcessStatus").update(\n { "batchName" : { $exists : true }, "sourceId" : { "type" : type, "value" : value } },\n { $set: { "checksum": "" } },\n { multi: true}\n )\n \n printjson(result);\n \n });\n \n var end = new Date().getTime();\n var duration = end - start;\n print("duration: " + duration + " ms")\n print("END")\n"
},
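The unset-checksum script above derives the source type and crosswalk value by splitting the registry key on "/". A minimal Python sketch of that key-parsing step (the function name is illustrative, not part of the real scripts):

```python
def parse_registry_key(key):
    """Split a keyIdRegistry key like 'ONEKEY/HCP/HCE/8091473/0000407429'
    into the (sourceId.type, sourceId.value) pair used to clear checksums
    in batchEntityProcessStatus, mirroring the mongo script above."""
    parts = key.split("/")
    source_type = "configuration/sources/" + parts[0]
    value = parts[3]
    return source_type, value

# Example key from the page; note that for ONEKEY keys parts[3] is NOT a
# crosswalk, which is exactly the issue described above.
print(parse_registry_key("ONEKEY/HCP/HCE/8091473/0000407429"))
```

For ONEKEY rows this yields the non-crosswalk value, which is why the page proposes matching via a separately generated crosswalk file instead.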
{
"title": "Changelog of removed duplicates",
"pageID": "172294537",
"pageLink": "/display/GMDM/Changelog+of+removed+duplicates",
"content": "01.02.2021 - DROP keys          Duplicate_Address_Ids.txt         nohup ./script.sh inbound/Duplicate_Address_Ids.txt > EXTRACT_Duplicate_Address_Ids.txt &19.04.2021 - DROP keys STAGE GBLUS          Duplicate_Address_Ids_16042021.txt - 11 380 - 1 ONEKEY, ICUE, CENTRIS          nohup ./script.sh inbound/Duplicate_Address_Ids_16042021.txt > EXTRACT_Duplicate_Address_Ids_16042021.txt &17.05.2021 - DROP STAGE GBLUS          Duplicate_Address_Ids_17052021.txt - 25121 - 1 ONEKEY          nohup ./script.sh inbound/Duplicate_Address_Ids_17052021.txt > EXTRACT_Duplicate_Address_Ids_17052021.txt25.06.2021 - DROP STAGE GBLUS          Duplicate_Address_Ids_17052021.txt - 71509, 2 ONEKEY         nohup ./script.sh inbound/Duplicate_Address_Ids_25062021.txt > EXTRACT_Duplicate_Address_Ids_25062021.txt &12.07.2021 - DROP PROD GBLUS          Duplicate_Address_Ids_12072021.txt - 4550 Duplicate_Address_Ids_12072021.txt - us/prod/inbound/cdw/one-time-feeds/Address-DeDup/FileSet-3/         nohup ./script.sh inbound/Duplicate_Address_Ids_12072021.txt > EXTRACT_Duplicate_Address_Ids_12072021.txt & "
},
{
"title": "Cache Address ID Update Process",
"pageID": "164469955",
"pageLink": "/display/GMDM/Cache+Address+ID+Update+Process",
"content": "1. Log using S3 browser to production bucket gblmdmhubprodamrasp101478 and go to dir /us/prod/inbound/cdw/one-time-feeds/COMPANY-address-id/ and check last update dates2. Log using mdmusnpr service user to server amraelp00007334.COMPANY.com using ssh3. Sync files from S3 using below commanddocker run -u 27519996:24670575 -e "AWS_ACCESS_KEY_ID=<access_key>" -e "AWS_SECRET_ACCESS_KEY=<secret_access_key>" -e "AWS_DEFAULT_REGION=us-east-1" -v /app/mdmusnpr/AddressID/inbound:/src:z mesosphere/aws-cli s3 sync s3://gblmdmhubprodamrasp101478/us/prod/inbound/cdw/one-time-feeds/COMPANY-address-id/ /src4. After syncing check new files with those two commads replacing new_file_name with name of the file which was updated. Check in script file that SRC_SYS and MDM_SRC_SYS exists, if not something is wrong and probably script needs to be updated ask the person who asked for address id updatecut -d',' -f8 <new_file_name> | sort | uniqcut -d',' -f9 <new_file_name> | sort | uniq5. Remove old extracts from /app/mdmusnpr/AddressIDrm EXTRACT_<new_file_name>6. Run script which will prepare data for mongonohup ./script.sh inbound/<new_file_name> > EXTRACT_<new_file_name> &Wait until processing in foreground finishes. Check after some time using below command:ps ax | grep scriptIf process is marked as done You can continue with next file or if there is no more files You can proceed to next step.7. Log in using Your user to the server amraelp00007334.COMPANY.com and change to root8. Go to /app/mongo/config and remove old extractsrm EXTRACT_<new_file_name>9. Go to /app/mdmusnpr/AddressID and copy new extracts to mongocp EXTRACT_<new_file_name> /app/mongo/config/10. Run mongo shelldocker exec -it mongo_mongo_1 bashcd /data/configdb11. 
Execute following command for each non prod env and for every new extract file<db_name> - reltio_dev, reltio_qa, reltio_stagemongoimport --host=localhost:27017 --username=admin --password=<db_password> --authenticationDatabase=admin --db=<db_name> --collection=keyIdRegistry --type=csv --columnsHaveTypes --fields="_id.string(),key.string(),sequence.string(),generatedId.int64(),_class.string()" --file=EXTRACT_<new_file_name> --mode=upsertWrite into changelog the number of records that were updated - it should be equal on all envs.12. If needed and requested update production using following commandmongoimport --host=mongo_reltio_repl_set/amraelp00007844.COMPANY.com:27017,amraelp00007870.COMPANY.com:27017,amraelp00007847.COMPANY.com:28017 --username=admin --password=<prod_db_password> --authenticationDatabase=admin --db=reltio_prod --collection=keyIdRegistry --type=csv --columnsHaveTypes --fields="_id.string(),key.string(),sequence.string(),generatedId.int64(),_class.string()" --file=EXTRACT_<new_file_name> --mode=upsert13. Verify number of entries from input file with updated records number in mongo14. Update changelog15. Respond to email that update is done16. Force merge will be generated - there will be mail about this.17. Download force merge delta from S3 using S3 browser and change name to merge_<date>_1.csvbucket: gblmdmhubprodamrasp101478path: us/prod/inbound/HcpmForceMerge/ForceMergeDelta18. Upload file merge_<date>_1.csv tobucket: gblmdmhubprodamrasp101478path: us/prod/inbound/hub/merge_unmerge_entities/input/19. Trigger dag https://mdm-monitoring.COMPANY.com/airflow/tree?dag_id=merge_unmerge_entities_gblus_prod_gblus20. 
After dag is finished login using S3 Browser bucket: gblmdmhubprodamrasp101478path: us/prod/inbound/hub/merge_unmerge_entities/output/<most_recent_date>_<most_recent_time>so for date 17/5/2021 and time 12:11: 39, the file looks like this:          us/prod/inbound/hub/merge_unmerge_entities/output/20210517_121139and download result file, check for failed merge and send it in response to email about force merge"
},
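The sanity check in step 4 (`cut -d',' -f8/-f9 | sort | uniq`) can be mirrored in Python; a small sketch, assuming SRC_SYS and MDM_SRC_SYS are the 8th and 9th comma-separated columns as in the example rows elsewhere on this page:

```python
def distinct_columns(lines, field):
    """Equivalent of: cut -d',' -f<field> <file> | sort | uniq
    (field is 1-based, like cut's -f option)."""
    return sorted({line.split(",")[field - 1] for line in lines})

# Sample rows in the CSV layout shown on the duplicate-address page.
sample = [
    "HCPM,HCP,0000407429,8091473,HCE,38357661,1374316,HCPS,HCPS,2021-04-15",
    "HCPM,HCP,a012K0,0012K0,HCP,109525669,178336284,VVA,VVA,2021-04-15",
]
src_sys = distinct_columns(sample, 8)      # SRC_SYS column
mdm_src_sys = distinct_columns(sample, 9)  # MDM_SRC_SYS column
print(src_sys, mdm_src_sys)
```

If either list comes back empty or full of unexpected values, the file layout has changed and script.sh likely needs updating, as the step warns.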
{
"title": "Changelog of updated",
"pageID": "164469954",
"pageLink": "/display/GMDM/Changelog+of+updated",
"content": "20.11.2020 - Loading NEW files:GRV & ENGAGE 1.0nohup ./script.sh inbound/COMPANY_ADDRESS_ID_EXTRACT_PAC_ENG.txt > EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_PAC_ENG.txt &IQVIA_RXnohup ./script.sh inbound/COMPANY_ADDRESS_ID_EXTRACT_HCPS00.txt > EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_HCPS00.txt &IQVIA_MCO & MILLIMAN & MMITnohup ./script.sh inbound/COMPANY_ACCOUNT_ADDR_ID_EXTRACT.txt > EXTRACT_COMPANY_ACCOUNT_ADDR_ID_EXTRACT.txt &09.12.2020 - Loading new file: -> 46092714.12.2020 - Loading new file: PAC_ENG -> 820 document, CAPP-> 464583 document16.12.2020 - Loading MILLIMAN_MCO: 10504 document22.12.2020 - Loading CPMRTE: 15686 document, CAPP: 1287, PAC_ENG: 1340, VVA: 11927070, IMS: 343, HCO i SAP problem, CENTRIS: 41496, hcps00: 421529.12.2020 - Loading PAC_ENG: 1260, CAPP: 141404.01.2021 - Loading PAC_ENG: 330, CAPP: 33808.01.2021 - Loading HCPS00: 321411.01.2021 - Loading PAC_ENG: 496, CAPP: 51218.01.2021 - Loading PAC_ENG: 616, CAPP: 79525.01.2021 - Loading PAC_ENG: 1009, CAPP: 93901.02.2021 - Loading PAC_ENG: 884, CAPP: 110608.02.2021 - Loading PAC_ENG: 576, CAPP: 39415.02.2021 - Loading PAC_ENG: 690, CAPP: 69617.02.2021 - Loading VVA: 1204836422.02.2021 - Loading PAC_ENG: 724, CAPP: 75701.03.2021 - Loading PAC_ENG: 906, CAPP: 96926.04.2021 - Loading PAC_ENG: 738, CAPP: 79511.05.2021 - Loading PAC_ENG: 589, CAPP: 62617.05.2021 - Loading PAC_ENG: 489, CAPP: 61317.05.2021 - Loading - us/prod/inbound/cdw/one-time-feeds/COMPANY-address-id/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_HCPS_20210511.txt                     Updated: 1171703 - customers updated - cleared cache in batchEntityProcessStatus collection for reload                     Updated: 1513734 - document(s) imported successfully in KeyIdRegistry18.05.2021 - STAGE only      COMPANY_ACCOUNT_ADDR_ID_EXTRACT_HCOS_20210511_fix.txt - 43771 document(s) imported successfully      COMPANY_ACCOUNT_ADDR_ID_EXTRACT_IMS_20210511.txt - 10076 document(s) imported successfully19.05.3021 -  Load 15 Files to PROD and 
clear cache. Load these files to DEV QA and STAGE      2972 May 17 11:40 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ACCOUNT_ADDR_ID_EXTRACT_DVA_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ACCOUNT_ADDR_ID_EXTRACT_DVA_20210511.txt &      19124366 May 19 07:11 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ACCOUNT_ADDR_ID_EXTRACT_HCOS_20210511_fix.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ACCOUNT_ADDR_ID_EXTRACT_HCOS_20210511_fix.txt &      3154666 May 17 11:41 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ACCOUNT_ADDR_ID_EXTRACT_IMS_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ACCOUNT_ADDR_ID_EXTRACT_IMS_20210511.txt &      221969 May 17 11:40 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ACCOUNT_ADDR_ID_EXTRACT_MLM_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ACCOUNT_ADDR_ID_EXTRACT_MLM_20210511.txt &      214430 May 17 11:41 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ACCOUNT_ADDR_ID_EXTRACT_MMIT_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ACCOUNT_ADDR_ID_EXTRACT_MMIT_20210511.txt &      163142 May 17 11:40 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ACCOUNT_ADDR_ID_EXTRACT_SAP_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ACCOUNT_ADDR_ID_EXTRACT_SAP_20210511.txt &      73236 May 17 15:00 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_APUS-VVA_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_APUS-VVA_20210511.txt &      6399709 May 17 15:00 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_CENTRIS_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_CENTRIS_20210511.txt &      60175 May 17 15:00 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_EMDS-VVA_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_EMDS-VVA_20210511.txt &      318915 May 17 14:59 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_ENG_20210511.txt > 
Prod_Sync_FileSet/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_ENG_20210511.txt &      13528 May 17 14:59 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_HCPS_ZIP_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_HCPS_ZIP_20210511.txt &      1360570 May 17 14:59 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_KOL_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_KOL_20210511.txt &      8135990 May 17 14:59 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_PAC_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_PAC_20210511.txt &      14583373 May 17 15:00 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_SHS_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_SHS_20210511.txt &      283564 May 17 15:00 nohup ./script.sh inbound/Prod_Sync_FileSet/COMPANY_ADDRESS_ID_EXTRACT_SHS_ZIP_20210511.txt > Prod_Sync_FileSet/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_SHS_ZIP_20210511.txt &24.05.2021 - Loading PAC_ENG: Dev:1283, QA: 1283, Stage: 1509, Prod: 1283                                         CAPP: Dev: 1873, QA: 1392, Stage: 1873, Prod: 18731/6/2021 - Loading PAC_ENG: 379, CAPP: 4339/6/2021 - Loading PAC_ENG: 38, CAPP: 4714/6/2021 - Loading PAC_ENG: 83, CAPP: 10216/6/2021 - Loading COMPANY_ACCT: Prod: 236 28/06/2021 - Loading PAC_ENG: Dev:182, QA: 182, Stage: 182, Prod: 646, CAPP: Dev: 215, QA: 215, Stage: 215, Prod: 21502.07.2021     Load 11 Files to PROD and clear cache. 
Load these files to DEV QA and STAGE     nohup ./script.sh inbound/Prod_Sync_FileSet_3/COMPANY_ACCOUNT_ADDR_ID_EXTRACT_HCOS_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ACCOUNT_ADDR_ID_EXTRACT_HCOS_20210630.txt &    nohup ./script.sh inbound/Prod_Sync_FileSet_3/COMPANY_ACCOUNT_ADDR_ID_EXTRACT_IMS_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ACCOUNT_ADDR_ID_EXTRACT_IMS_20210630.txt &    nohup ./script.sh inbound/Prod_Sync_FileSet_3/COMPANY_ACCOUNT_ADDR_ID_EXTRACT_MLM_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ACCOUNT_ADDR_ID_EXTRACT_MLM_20210630.txt &    nohup ./script.sh inbound/Prod_Sync_FileSet_3/COMPANY_ADDRESS_ID_EXTRACT_APUS-VVA_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_APUS-VVA_20210630.txt &    nohup ./script.sh inbound/Prod_Sync_FileSet_3/COMPANY_ADDRESS_ID_EXTRACT_CENTRIS_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_CENTRIS_20210630.txt &    nohup ./script.sh inbound/Prod_Sync_FileSet_3/COMPANY_ADDRESS_ID_EXTRACT_EMDS-VVA_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_EMDS-VVA_20210630.txt &    nohup ./script.sh inbound/Prod_Sync_FileSet_3/COMPANY_ADDRESS_ID_EXTRACT_HCPS_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_HCPS_20210630.txt &    nohup ./script.sh inbound/Prod_Sync_FileSet_3/COMPANY_ADDRESS_ID_EXTRACT_HCPS_ZIP_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_HCPS_ZIP_20210630.txt &    nohup ./script.sh inbound/Prod_Sync_FileSet_3/COMPANY_ADDRESS_ID_EXTRACT_KOL_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_KOL_20210630.txt &    nohup ./script.sh inbound/Prod_Sync_FileSet_3/COMPANY_ADDRESS_ID_EXTRACT_SHS_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_SHS_20210630.txt &    nohup ./script.sh inbound/Prod_Sync_FileSet_3/COMPANY_ADDRESS_ID_EXTRACT_SHS_ZIP_20210630.txt > Prod_Sync_FileSet_3/EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_SHS_ZIP_20210630.txt &5/7/2021 - Loading 
PAC_ENG: 39 , CAPP: 4416.07.2021     Load 1 VVA File to PROD and clear cache. Load this file to DEV QA and STAGE     nohup ./script.sh inbound/COMPANY_ADDRESS_ID_EXTRACT_VVA_20210715.txt > EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_VVA_20210715.txt &20.07.2021     Load 1 VVA File to PROD and clear cache. Load this file to DEV QA and STAGE     nohup ./script.sh inbound/COMPANY_ADDRESS_ID_EXTRACT_VVA_20210718.txt > EXTRACT_COMPANY_ADDRESS_ID_EXTRACT_VVA_20210718.txt &GBLUS/Fletcher PROD GO-LIVE COMPANYAddressID sequence - PROD (MAX)139510034 + 5000000 = 144510034"
},
{
"title": "Manual Cache Clear",
"pageID": "164470086",
"pageLink": "/display/GMDM/Manual+Cache+Clear",
"content": "Open Studio 3T and connect to appropriate Mongo DBOpen IntelliShellRun following query for appropriate source - replace <source> with right name\ndb.getCollection("batchEntityProcessStatus").updateMany({"sourceId.type":"configuration/sources/<source>"}, {$set: {"checksum" : ""}})\n"
},
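The updateMany call above takes a filter on sourceId.type and a $set that blanks the checksum. A tiny Python sketch that builds those two documents (e.g. for use with pymongo's update_many; the helper name is illustrative, not part of the HUB codebase):

```python
def clear_cache_spec(source):
    """Build the (filter, update) pair used to blank checksums for one
    source in batchEntityProcessStatus, mirroring the query above."""
    flt = {"sourceId.type": "configuration/sources/" + source}
    upd = {"$set": {"checksum": ""}}
    return flt, upd

flt, upd = clear_cache_spec("ONEKEY")
print(flt, upd)
```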
{
"title": "Data Quality",
"pageID": "492471763",
"pageLink": "/display/GMDM/Data+Quality",
"content": ""
},
{
"title": "Quality Rules Deployment Process",
"pageID": "492471766",
"pageLink": "/display/GMDM/Quality+Rules+Deployment+Process",
"content": "Resource changingThe process regards modifying the resources related to data quality configuration that are stored in Consul and load by mdm-manager, mdm-onekey-dcr-service, precallback-service components in runtime. They are present in mdm-config-registry/config-hub location.When modifying data quality rules configuration present at mdm-config-registry/config-hub/<env_name>/mdm-manager/quality-service/quality-rules , the following rules should be applied:Each YAML file should be formatted in accordance with yamllint rules (See Yamllint validation rules)The attributes createdDate/modifiedDate were deleted from the rules configuration files. They will be automatically set for each rule during the deployment process. (See Deployment of changes)Adding more than one rule with the same value of name attribute is not allowed.PR validationEvery PR to mdm-config-registry repository is validated for correctness of YAML syntax (See Yamllint validation rules). Upon PR creation the job is triggered that checks the format of YAML files using yamllint. The jobs succeeds only when all the yaml files in repository passed the yamllint test.The PRs that did not passed validations should not be merged to master.Deployment of changesAll changes in mdm-config-registry/config-hub should be deployed to consul using JENKINS JOBS. The separate job exist for deploying changes done on each environment. Eg. job deploy_config_amer_nprod_amer-dev is used to deploy all changes done on AMER DEV environment (all changes under path mdm-config-registry/config/hub/dev_amer). 
Jobs allow to deploy configuration from master branch or PR's to mdm-config-registry repo.The deployment job flow can be described by the following diagram:StepsClean workspace - wipes workspace of all the files left from previous job run.Checkout mdm-config-registry - this repository contains files with data quality configuration and yamllint rulesCheckout mdm-hub-cluster-env - this repository contains script for assigning createdDate / modifiedDate attributes to quality rules and ansible job for running this script and uploading files to consul.Validate yaml files - runs yamllint validation for every YAML file at mdm-config-registry/config-hub/<env_name> (See Yamllint validation rules)Get previous quality rules registry files - downloads quality rules registry file produced after previous successfull run of a job. The file is responsible for storing information about modification dates and checksum of quality rules. Decision if modification dates should be update is made based on checksum change, . The registry file is a csv with the following headers:ID - ID for each quality rule in form of <file_name>:<rule_name>CREATED_DATE - stores createdDate attribute value for each ruleMODIFIED_DATE - stores modifiedDate attribute value for each ruleCHECKSUM - stores checksum counted for each ruleUpdate Quality Rules files - runs ansible job responsible for:Running script QualityRuleDatesManager.groovy - responsible for adjusting createdDate / modifiedDate for quality rules based on checksum changes and creating new quality rules registry file.Updating changed quality rules files in Consul kv store.Archive quality rules registry file - save new registry file in job artifacts.Algorithm of updating modification datesThe following algorithm is implemented in QualityRuleDatesManager.groovy script. The main goal of this is to update createdDate/modifiedDate in the case when new quality rule has been added or its definition changed.Yamllint validation rulesTODO"
},
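The date-adjustment algorithm described above (keep createdDate, bump modifiedDate only on a checksum change, stamp both for new rules) can be sketched as pure logic. A hedged Python sketch - the real QualityRuleDatesManager.groovy may differ in details such as handling of deleted rules:

```python
def update_registry(previous, current, now):
    """previous: {rule_id: (created, modified, checksum)} from the last run.
    current:  {rule_id: checksum} computed from the YAML files.
    Returns the new registry. New rules get created = modified = now;
    a changed checksum bumps modifiedDate; unchanged rules keep both dates."""
    registry = {}
    for rule_id, checksum in current.items():
        if rule_id not in previous:                # new rule
            registry[rule_id] = (now, now, checksum)
        else:
            created, modified, old_checksum = previous[rule_id]
            if checksum != old_checksum:           # definition changed
                registry[rule_id] = (created, now, checksum)
            else:                                  # untouched
                registry[rule_id] = (created, modified, checksum)
    return registry

prev = {"rules.yaml:r1": ("2021-01-01", "2021-01-01", "aaa")}
new = update_registry(prev, {"rules.yaml:r1": "bbb", "rules.yaml:r2": "ccc"}, "2021-06-01")
print(new)
```

Each registry entry maps onto one row of the ID / CREATED_DATE / MODIFIED_DATE / CHECKSUM CSV described above.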
{
"title": "DCRs:",
"pageID": "259432965",
"pageLink": "/pages/viewpage.action?pageId=259432965",
"content": ""
},
{
"title": "DCR Service 2:",
"pageID": "302705607",
"pageLink": "/pages/viewpage.action?pageId=302705607",
"content": ""
},
{
"title": "Reject pending VOD DCR - transfer to Data Stewards",
"pageID": "415993922",
"pageLink": "/display/GMDM/Reject+pending+VOD+DCR+-+transfer+to+Data+Stewards",
"content": "DescriptionThere's a DCR request which was sent to Veeva OpenData (VOD) by HUB however it hasn't been processed - we didn't receive information whether is should be ACCEPTED or REJECTED. This causes a couple of things:in RELTIO we're having DCR in status VR Status = OPEN and VR Detailed Status = SENTin Mongo in collection DCRRequest we're having DCR in status = SENT_TO_VEEVAin Mongo in collection DCRVeevaRequest we're having DCR in status = SENTalerts are raised in Prometheus/Karma since we usually should receive response within couple of daysGoalWe want to simulate REJECT response from VOD which will make DCR to return to Reltio for further processing by Data Stewards. This may be realized in a couple of ways: Procedure #1 - (minutes to process) Populate event to topic $env-internal-veeva-dcr-change-events-in which skips VeevaAdapter and simulates response from VeevaAdapter to DCR Service 2 → see diagram for more details Veeva DCR flowsProcedure #2 - (hours to process) Create DCR response ZIP file with specific payload, which needs to be placed to specific S3 location, which is further ingested by VeevaAdapterProcedure #1Step 1 - Adjust below event template(optional) update eventTime to current timestamp in milliseconds → use https://www.epochconverter.com/(optional) update countryCode to the on from Request(requited) update dcrId to the one you want JSON event to populate\n{\n "eventType": "CHANGE_REJECTED",\n "eventTime": 1712573721000,\n "countryCode": "SG",\n "dcrId": "a51f229331b14800846503600c787083",\n "vrDetails": {\n "vrStatus": "CLOSED",\n "vrStatusDetail": "REJECTED",\n "veevaComment": "MDM HUB: Simulated reject response to close DCR.",\n "veevaHCPIds": [],\n "veevaHCOIds": []\n }\n}\nStep 2 - Populate event to topic $env-internal-veeva-dcr-change-events-in (for APAC-STAGE: apac-stage-internal-veeva-dcr-change-events-in). 
For this purpose use AKHQ (for APAC-STAGE: https://akhq-apac-nprod-gbl-mdm-hub.COMPANY.com/ui/login)Select topic $env-internal-veeva-dcr-change-events-in and use "Produce to Topic" button in bottom rightPaste event details, update Key by providing dcrId and press "Populate"After a couple of minutes two things should be in effect:DCR in Reltio should change its status from SENT_TO_VEEVA to DS Action RequiredMongoDB document in collection DCRRegistry will change its status to DS_ACTION_REQUIREDStep 3 - update MongoDB DCRRegistryVeeva collection Connect to Mongo with Studio 3T, find out document using "_id" in collection DCRRegistryVeeva and update its status to REJECTED and changeDate to current one.Document update\n{\n $set : {\n "status.name" : "REJECTED",\n "status.changeDate" : "2024-04-07T17:42:37.882195Z"\n }\n}\nStep 4 - check Reltio DCRCheck if DCR status has changed to "DS Action Required" and DCR Tracing details has been updated with simulated Veeva Reject response. "
},
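The Step 1 template can also be assembled programmatically before pasting it into AKHQ. A minimal sketch - the field values mirror the template on this page, while the helper name is illustrative; the Kafka message key must be the dcrId, as Step 2 describes:

```python
import json
import time

def build_reject_event(dcr_id, country_code, event_time_ms=None):
    """Assemble the simulated CHANGE_REJECTED event for the
    $env-internal-veeva-dcr-change-events-in topic."""
    return {
        "eventType": "CHANGE_REJECTED",
        "eventTime": event_time_ms or int(time.time() * 1000),
        "countryCode": country_code,
        "dcrId": dcr_id,
        "vrDetails": {
            "vrStatus": "CLOSED",
            "vrStatusDetail": "REJECTED",
            "veevaComment": "MDM HUB: Simulated reject response to close DCR.",
            "veevaHCPIds": [],
            "veevaHCOIds": [],
        },
    }

event = build_reject_event("a51f229331b14800846503600c787083", "SG", 1712573721000)
print(json.dumps(event, indent=1))
```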
{
"title": "Close VOD DCR - override any status",
"pageID": "492489948",
"pageLink": "/display/GMDM/Close+VOD+DCR+-+override+any+status",
"content": "This SoP is almost identical to the one in Override VOD Accept to VOD Reject for VOD DCR with small updates:In Step 1, please also update target = VOD to target = Reltio. "
},
{
"title": "Override VOD Accept to VOD Reject for VOD DCR",
"pageID": "490649621",
"pageLink": "/display/GMDM/Override+VOD+Accept+to+VOD+Reject+for+VOD+DCR",
"content": "DescriptionThere's a DCR request which was sent to Veeva OpenData (VOD) and mistakenly ACCEPTED, however business requires such DCR to be Rejected and redirected to DSR for processing via Reltio Inbox.GoalWe want to:remove incorrect entries in DCR Tracking details - usually "Veeva Accepted" and "Waiting for ETL Data Load"simulate REJECT response from VOD which will make DCR to return to Reltio for further processing by Data Stewards→ Populate event to topic $env-internal-veeva-dcr-change-events-in which skips VeevaAdapter and simulates response from VeevaAdapter to DCR Service 2 → see diagram for more details Veeva DCR flowsProcedureStep 0 - Assume that VOD_NOT_FOUNDSet retryCounter to 9999Wait for 12hStep 1 - Adjust DCR document in MongoDB in DCRRegistry collection (Studio3T)Remove incorrect DCR Tracking entries for your DCR (trackingDetails section) - usually nested attribute 3 and 4 in this sectionSet retryCounter to 0Set status.name to "SENT_TO_VEEVA"Step 2 - update MongoDB DCRRegistryVeeva collection Connect to Mongo with Studio 3T, find out document using "_id" in collection DCRRegistryVeeva and update its status to REJECTED and changeDate to current one.Document update\n{\n $set : {\n "status.name" : "REJECTED",\n "status.changeDate" : "2024-04-07T17:42:37.882195Z"\n }\n}\nStep 3 - Adjust below event template(optional) update eventTime to current timestamp in milliseconds → use https://www.epochconverter.com/(optional) update countryCode to the on from Request(requited) update dcrId to the one you want JSON event to populate\n{\n "eventType": "CHANGE_REJECTED",\n "eventTime": 1712573721000,\n "countryCode": "SG",\n "dcrId": "a51f229331b14800846503600c787083",\n "vrDetails": {\n "vrStatus": "CLOSED",\n "vrStatusDetail": "REJECTED",\n "veevaComment": "MDM HUB: Simulated reject response to close DCR.",\n "veevaHCPIds": [],\n "veevaHCOIds": []\n }\n}\nStep 4 - Populate event to topic $env-internal-veeva-dcr-change-events-in (for APAC-STAGE: 
apac-stage-internal-veeva-dcr-change-events-in). For this purpose use AKHQ (for APAC-STAGE: https://akhq-apac-nprod-gbl-mdm-hub.COMPANY.com/ui/login)Select topic $env-internal-veeva-dcr-change-events-in and use "Produce to Topic" button in bottom rightPaste event details, update Key by providing dcrId and press "Populate"After a couple of minutes (it depends on the traceVR schedule - it my take up to 6h on PROD) two things should be in effect:DCR in Reltio should change its status from SENT_TO_VEEVA to DS Action RequiredMongoDB document in collection DCRRegistry will change its status to DS_ACTION_REQUIREDStep 6 - check Reltio DCRCheck if DCR status has changed to "DS Action Required" and DCR Tracing details has been updated with simulated Veeva Reject response. "
},
{
"title": "DCR escalation to Veeva Open Data (VOD)",
"pageID": "430348063",
"pageLink": "/pages/viewpage.action?pageId=430348063",
"content": "Integration failIt occasionally happens that DCR response files from Veeva are not being delivered to S3 bucket which is used for ingestion by HUB. VOD provides CVS/ZIP files every day, even though there's no actual payload related to DCRs - files contain only CSV headers. This disruption may be caused by two things: VOD didn't generate DCR response and didn't place it on their SFTPGMFT's synchronization job responsible for moving file between SFTP and S3 stopped working Either way, we need to pin point of the two are causing the problem.Troubleshooting It's usually good to check when the last synchronization took place.GMFT issueIf there is more than one file (usually this dir should be empty) in outbound directory /globalmdmprodaspasp202202171415/apac/prod/outbound/vod/APAC/DCR_request it means that GMFT job does not push files from S3 to SFTP. The files which are properly processed by GMFT job are copied to Veeva SFTP and additionally moved to  /globalmdmprodaspasp202202171415/apac/prod/archive/vod/APAC/DCR_request.Veeva Open Data issueOnce you are sure it's not GMFT issue, check archive directory for the latest DCR response file: /globalmdmprodaspasp202202171415/apac/prod/archive/vod/APAC/DCR_response/globalmdmprodaspasp202202171415/apac/prod/archive/vod/CN/DCR_responseIf the latest file is older that 24h → there's an issue on VOD side. 
Who to contact?SFTP, please contact DL-GMFT-EDI-PRD-SUPPORT@COMPANY.com or directly to barath.s@COMPANY.com, kothai.nayaki@COMPANY.com and CC: sabari.mahendran@COMPANY.comVeeva Open data(important one) create ticket in smartsheet: https://app.smartsheet.com/sheets/pqmwRfRjCxRRCXgwRJf2629fGqrjfFpQ6fWPjfM1 → you may not have access to this file without prior request to moneem.ahmed@veeva.comat the moment Irek has access to this file(optional) please contact laurie.koudstaal@COMPANY.com, quiterie.duco@veeva.com, (and for escalation and PROD issues CC: vincent.pavan@veeva.com, moneem.ahmed@veeva.com and sabari.mahendran@COMPANY.com)"
},
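The "latest file older than 24h" check described above can be expressed as a tiny helper; a sketch assuming the DCR_response file modification times have already been listed from S3/SFTP by other means:

```python
from datetime import datetime, timedelta

def vod_feed_is_stale(file_mtimes, now, max_age=timedelta(hours=24)):
    """True when the newest DCR_response file is older than max_age,
    i.e. the daily VOD drop (or the GMFT sync) has stopped arriving."""
    if not file_mtimes:
        return True
    return now - max(file_mtimes) > max_age

now = datetime(2024, 4, 8, 12, 0)
fresh = vod_feed_is_stale([datetime(2024, 4, 8, 3, 0)], now)   # today's drop
stale = vod_feed_is_stale([datetime(2024, 4, 5, 3, 0)], now)   # 3 days old
print(fresh, stale)
```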
{
"title": "DCR rejects from IQVIA due to missing RDM codes",
"pageID": "475927691",
"pageLink": "/display/GMDM/DCR+rejects+from+IQVIA+due+to+missing+RDM+codes",
"content": "DescriptionSometimes our Clients are being provided with below error message when they are trying to send DCRs to OneKey. This request was not accepted by the IQVIA due to missing RDM code mapping and was redirected to Reltio Inbox. The reason is: 'Target lookup code not found for attribute: HCPSpecialty, country: CA, source value: SP.ONCM.'. This means that there is no equivalent of this code in IQVIA code mapping. Please contact MDM Hub DL-ATP_MDMHUB_SUPPORT@COMPANY.com asking to add this code and click "SendTo3Party" in Reltio after Hub's confirmation.WhyThis is caused when PforceRx tries to send DCR with changes on attribute with Lookup Values. On HUB end we're trying to remap canonical codes from Reltio/RDM to source mapping values which are specific to OneKey and understood by them. Usual we are dealing with situation that for each canonical code there is a proper source code mapping mapping. Please refer to below screen (Mongo collection LookupValues). However when their is no such mapping like in case below (no ONEKEY entry in sourceMappings) then we're dealing with problem aboveFor more information about canonical code mapping and the flow to get target code sent to OneKey or VOD, please refer to → Veeva: create DCR method (storeVR), section "Mapping Reltio canonical codes → Veeva source codes"HowWe should contact people responsible for RDM codes mappings (MDM COMPANY team) to add find out correct sourceMapping value for this specific canonical code for specific country. In the end they will contact AJ to add it to RDM (usually every week)."
},
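The missing-mapping condition above boils down to: no ONEKEY entry under sourceMappings for the canonical code. A sketch with a hypothetical document shape - the field names follow the description above, not a verified LookupValues schema:

```python
def missing_target_mapping(lookup_doc, target="ONEKEY"):
    """True when the canonical code has no source-code mapping for the
    target system, which is what triggers the IQVIA reject above."""
    return target not in lookup_doc.get("sourceMappings", {})

# Hypothetical document mirroring the SP.ONCM / CA case from the error text.
doc = {
    "canonicalCode": "SP.ONCM",
    "country": "CA",
    "sourceMappings": {"VOD": "ONC"},  # no ONEKEY entry -> DCR is rejected
}
print(missing_target_mapping(doc))
```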
{
"title": "Defaults",
"pageID": "284795409",
"pageLink": "/display/GMDM/Defaults",
"content": "DCR defaults map the source codes of the Reltio system to the codes in the OneKey or VOD (Veeva Open Data) system. Occur for specific types of attributes: HCPSpecialities, HCOSpecialities, HCPTypeCode, HCOTypeCode, HCPTitle, HCOFacilityType. The values are configured in the Consul system. To configure the values:  Sort the source (.xlsx) file: Divide the file into separate sheets for each attribute.Save the sheets in separate csv format files - columns separated by semicolons.Paste the contents of the files into the appropriate files in the consul configuration repository - mdm-config-registry:  - each environment has its own folder in the configuration repository  - files must have header- Country;CanonicalCode;DefaultFor more information about canonical code mapping and the flow to get target code sent to OneKey or VOD, please refer to → Veeva: create DCR method (storeVR), section "Mapping Reltio canonical codes → Veeva source codes""
},
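Before pasting a per-attribute CSV into mdm-config-registry, it is worth sanity-checking its shape against the rules above (expected header, three semicolon-separated columns per row). A minimal sketch; the sample rows are illustrative, not real mapping data:

```shell
# Validate one per-attribute CSV: first line must be the exact header,
# every data row must have exactly three ';'-separated columns.
cat > HCPSpecialities.csv <<'EOF'
Country;CanonicalCode;Default
CA;SP.ONCM;WCA.ONC
GB;SP.ONCM;WUK.ONC
EOF
awk -F';' 'NR==1 { if ($0 != "Country;CanonicalCode;Default") { print "bad header"; exit 1 } next }
           NF != 3 { print "bad row " NR; exit 1 }
           END { print "OK: " NR-1 " data rows" }' HCPSpecialities.csv
# → OK: 2 data rows
```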
{
"title": "Go-Live Readiness",
"pageID": "273696220",
"pageLink": "/display/GMDM/Go-Live+Readiness",
"content": "Procedure:"
},
{
"title": "OneKey Crosswalk is Missing and IQVIA Returned Wrong ID in TraceVR Response",
"pageID": "259432967",
"pageLink": "/display/GMDM/OneKey+Crosswalk+is+Missing+and+IQVIA+Returned+Wrong+ID+in+TraceVR+Response",
"content": "This SOP describes how to FIX the case when there is a DCR in OK_NOT_FOUND status and IQVIA change  the individualID from wrong one to correct one (due to human error)Example Case based on EMEA PROD: there is a DCR - 1fced0be830540a89c30f5d374754accstatus is OK_NOT_FOUNDmessage is Received ACCEPTED status from IQVIA, waiting for ONEKEY data load, missing crosswalks: WUKM00110951retrycounter reach 14 (7days)IQVIAshared the following trace VR response at firs and we closed the DCR:{"response.traceValidationRequestOutputFormatVersion":"1.8","response.status":"SUCCESS","response.resultSize":1,"response.totalNumberOfResults":1,"response.success":true,"response.results":[{"codBase":"WUK","cisHostNum":"4606","userEid":"04606","requestType":"Q","responseEntityType":"ENT_ACTIVITY","clientRequestId":"1fced0be830540a89c30f5d374754acc","cegedimRequestEid":"fbf706e175c847cb8f39a1873fc4daaf","customerRequest":null,"trace1ClientRequestDate":"2022-07-22T14:53:32Z","trace2CegedimOkcProcessDate":"2022-07-22T14:53:31Z","trace3CegedimOkeTransferDate":"2022-07-22T14:54:02Z","trace4CegedimOkeIntegrationDate":"2022-07-22T14:54:32Z","trace5CegedimDboResponseDate":"2022-07-28T07:27:34Z","trace6CegedimOkcExportDate":null,"requestComment":"FY1 Dr working in the stroke care unit at St Johns Hospital Livingston","responseComment":"HCP works at St Johns Hospital","individualEidSource":null,"individualEidValidated":"WUKM00110951","workplaceEidSource":"WUKH07885517","workplaceEidValidated":"WUKH07885517","activityEidSource":null,"activityEidValidated":"WUKM0011095101","addressEidSource":null,"addressEidValidated":"WUK00000092143","countryEid":"GB","processStatus":"REQUEST_RESPONDED","requestStatus":"VAS_FOUND","updateDate":"2022-07-28T07:56:45Z"}]}People involved in this topic:On Reltio side:On IQVIA side: After IQVIA check the TraceVR changed 
to:"response":{"traceValidationRequestOutputFormatVersion":1.8,"success":true,"status":"SUCCESS","totalNumberOfResults":1,"resultSize":1,"results":[{"activityEidSource":null,"activityEidValidated":"WUKM0011095501","addressEidSource":null,"addressEidValidated":"WUK00000092143","cegedimRequestEid":"fbf706e175c847cb8f39a1873fc4daaf","cisHostNum":"4606","clientRequestId":"1fced0be830540a89c30f5d374754acc","codBase":"WUK","countryEid":"GB","customerRequest":null,"individualEidSource":null,"individualEidValidated":"WUKM00110955","processStatus":"REQUEST_RESPONDED","requestComment":"FY1 Dr working in the stroke care unit at St Johns Hospital Livingston","requestEntityType":"ENT_ACTIVITY","requestFirstname":"Beth","requestLastname":"Mulloy","requestOrigin":"WS","requestProcess":"I","requestStatus":"VAS_FOUND","requestType":"Q","requestUsualWkpName":"Care of the Elderly Department","responseComment":"HCP works at St Johns Hospital","responseEntityType":"ENT_ACTIVITY","trace1ClientRequestDate":"2022-07-22T14:53:32Z","trace2CegedimOkcProcessDate":"2022-07-22T14:53:31Z","trace3CegedimOkeTransferDate":"2022-07-22T14:54:02Z","trace4CegedimOkeIntegrationDate":"2022-07-22T14:54:32Z","trace5CegedimDboResponseDate":"2022-07-28T07:27:34Z","trace6CegedimOkcExportDate":null,"lastResponseDate":"2022-07-28T07:43:40Z","updateDate":"2022-07-28T08:01:40Z","workplaceEidSource":"WUKH07885517","workplaceEidValidated":"WUKH07885517","userEid":"04606"}}the WUKM00110951 was changed to WUKM00110955This is blocking the DCRThe event that is constantly processing each 12h is in the emea-prod-internal-onekey-dcr-change-events-in The event was already generated so we need to overwrite it to fix the processingSTEPS:Go to https://akhq-emea-prod-gbl-mdm-hub.COMPANY.com/emea-prod-mdm-kafka/topic?search=dcr&show=HIDE_INTERNALFind the DCR by _id and get the latest event:Change the BodyFROM\n{\n "eventType": "DCR_CHANGED",\n "eventTime": 1658995201031,\n "eventPublishingTime": 1658995201031,\n "countryCode": 
"GB",\n "dcrId": "1fced0be830540a89c30f5d374754acc",\n "targetChangeRequest": {\n "vrStatus": "CLOSED",\n "vrStatusDetail": "ACCEPTED",\n "oneKeyComment": "ONEKEY response comment: HCP works at St Johns Hospital\\nONEKEY HCP ID: WUKM00110951\\nONEKEY HCO ID: WUKH07885517",\n "individualEidValidated": "WUKM00110951",\n "workplaceEidValidated": "WUKH07885517",\n "vrTraceRequest": "{\\"isoCod2\\":\\"GB\\",\\"validation.clientRequestId\\":\\"1fced0be830540a89c30f5d374754acc\\"}",\n "vrTraceResponse": "{\\"response.traceValidationRequestOutputFormatVersion\\":\\"1.8\\",\\"response.status\\":\\"SUCCESS\\",\\"response.resultSize\\":1,\\"response.totalNumberOfResults\\":1,\\"response.success\\":true,\\"response.results\\":[{\\"codBase\\":\\"WUK\\",\\"cisHostNum\\":\\"4606\\",\\"userEid\\":\\"04606\\",\\"requestType\\":\\"Q\\",\\"responseEntityType\\":\\"ENT_ACTIVITY\\",\\"clientRequestId\\":\\"1fced0be830540a89c30f5d374754acc\\",\\"cegedimRequestEid\\":\\"fbf706e175c847cb8f39a1873fc4daaf\\",\\"customerRequest\\":null,\\"trace1ClientRequestDate\\":\\"2022-07-22T14:53:32Z\\",\\"trace2CegedimOkcProcessDate\\":\\"2022-07-22T14:53:31Z\\",\\"trace3CegedimOkeTransferDate\\":\\"2022-07-22T14:54:02Z\\",\\"trace4CegedimOkeIntegrationDate\\":\\"2022-07-22T14:54:32Z\\",\\"trace5CegedimDboResponseDate\\":\\"2022-07-28T07:27:34Z\\",\\"trace6CegedimOkcExportDate\\":null,\\"requestComment\\":\\"FY1 Dr working in the stroke care unit at St Johns Hospital Livingston\\",\\"responseComment\\":\\"HCP works at St Johns 
Hospital\\",\\"individualEidSource\\":null,\\"individualEidValidated\\":\\"WUKM00110951\\",\\"workplaceEidSource\\":\\"WUKH07885517\\",\\"workplaceEidValidated\\":\\"WUKH07885517\\",\\"activityEidSource\\":null,\\"activityEidValidated\\":\\"WUKM0011095101\\",\\"addressEidSource\\":null,\\"addressEidValidated\\":\\"WUK00000092143\\",\\"countryEid\\":\\"GB\\",\\"processStatus\\":\\"REQUEST_RESPONDED\\",\\"requestStatus\\":\\"VAS_FOUND\\",\\"updateDate\\":\\"2022-07-28T07:56:45Z\\"}]}"\n }\n}\nTO\n{\n "eventType": "DCR_CHANGED",\n "eventTime": 1658995201031,\n "eventPublishingTime": 1658995201031,\n "countryCode": "GB",\n "dcrId": "1fced0be830540a89c30f5d374754acc",\n "targetChangeRequest": {\n "vrStatus": "CLOSED",\n "vrStatusDetail": "ACCEPTED",\n "oneKeyComment": "ONEKEY response comment: HCP works at St Johns Hospital\\nONEKEY HCP ID: WUKM00110955\\nONEKEY HCO ID: WUKH07885517",\n "individualEidValidated": "WUKM00110955",\n "workplaceEidValidated": "WUKH07885517",\n "vrTraceRequest": "{\\"isoCod2\\":\\"GB\\",\\"validation.clientRequestId\\":\\"1fced0be830540a89c30f5d374754acc\\"}",\n "vrTraceResponse": 
"{\\"response.traceValidationRequestOutputFormatVersion\\":\\"1.8\\",\\"response.status\\":\\"SUCCESS\\",\\"response.resultSize\\":1,\\"response.totalNumberOfResults\\":1,\\"response.success\\":true,\\"response.results\\":[{\\"codBase\\":\\"WUK\\",\\"cisHostNum\\":\\"4606\\",\\"userEid\\":\\"04606\\",\\"requestType\\":\\"Q\\",\\"responseEntityType\\":\\"ENT_ACTIVITY\\",\\"clientRequestId\\":\\"1fced0be830540a89c30f5d374754acc\\",\\"cegedimRequestEid\\":\\"fbf706e175c847cb8f39a1873fc4daaf\\",\\"customerRequest\\":null,\\"trace1ClientRequestDate\\":\\"2022-07-22T14:53:32Z\\",\\"trace2CegedimOkcProcessDate\\":\\"2022-07-22T14:53:31Z\\",\\"trace3CegedimOkeTransferDate\\":\\"2022-07-22T14:54:02Z\\",\\"trace4CegedimOkeIntegrationDate\\":\\"2022-07-22T14:54:32Z\\",\\"trace5CegedimDboResponseDate\\":\\"2022-07-28T07:27:34Z\\",\\"trace6CegedimOkcExportDate\\":null,\\"requestComment\\":\\"FY1 Dr working in the stroke care unit at St Johns Hospital Livingston\\",\\"responseComment\\":\\"HCP works at St Johns Hospital\\",\\"individualEidSource\\":null,\\"individualEidValidated\\":\\"WUKM00110955\\",\\"workplaceEidSource\\":\\"WUKH07885517\\",\\"workplaceEidValidated\\":\\"WUKH07885517\\",\\"activityEidSource\\":null,\\"activityEidValidated\\":\\"WUKM0011095501\\",\\"addressEidSource\\":null,\\"addressEidValidated\\":\\"WUK00000092143\\",\\"countryEid\\":\\"GB\\",\\"processStatus\\":\\"REQUEST_RESPONDED\\",\\"requestStatus\\":\\"VAS_FOUND\\",\\"updateDate\\":\\"2022-07-28T07:56:45Z\\"}]}"\n }\n}\nThe result is the replace in the individualEidValidated and all the places where ol ID existsPush the new event with new timestamp and same kafka key to the topicNew Case (2023-03-21)ONEKEY responded with ACCEPTED with ONEKEY ID but OneKey VR Trace response contains: "requestStatus": "VAS_FOUND_BUT_INVALID".DCR2 Service is checking every 12h if Onekey already provided the data to Reltio. 
We must manually close this DCR.Steps:In amer-prod-internal-onekey-dcr-change-events-in topic find the latest event for ID ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●.Change from:\n{\n\t"eventType": "DCR_CHANGED",\n\t"eventTime": 1677801600678,\n\t"eventPublishingTime": 1677801600678,\n\t"countryCode": "CA",\n\t"dcrId": "f19305a6e6af4b5aa03d26c1ec1ae5a6",\n\t"targetChangeRequest": {\n\t\t"vrStatus": "CLOSED",\n\t\t"vrStatusDetail": "ACCEPTED",\n\t\t"oneKeyComment": "ONEKEY response comment: Already Exists-Data Privacy\\nONEKEY HCP ID: WCAP00028176\\nONEKEY HCO ID: WCAH00052991",\n\t\t"individualEidValidated": "WCAP00028176",\n\t\t"workplaceEidValidated": "WCAH00052991",\n\t\t"vrTraceRequest": "{\\"isoCod2\\":\\"CA\\",\\"validation.clientRequestId\\":\\"f19305a6e6af4b5aa03d26c1ec1ae5a6\\"}",\n\t\t"vrTraceResponse": "{\\"response.traceValidationRequestOutputFormatVersion\\":\\"1.8\\",\\"response.status\\":\\"SUCCESS\\",\\"response.resultSize\\":1,\\"response.totalNumberOfResults\\":1,\\"response.success\\":true,\\"response.results\\":[{\\"codBase\\":\\"WCA\\",\\"cisHostNum\\":\\"7853\\",\\"userEid\\":\\"07853\\",\\"requestType\\":\\"Q\\",\\"responseEntityType\\":\\"ENT_ACTIVITY\\",\\"clientRequestId\\":\\"f19305a6e6af4b5aa03d26c1ec1ae5a6\\",\\"cegedimRequestEid\\":\\"9d02f7547dbc4e659a9d230c91f96279\\",\\"customerRequest\\":null,\\"trace1ClientRequestDate\\":\\"2023-02-27T23:53:44Z\\",\\"trace2CegedimOkcProcessDate\\":\\"2023-02-27T23:53:40Z\\",\\"trace3CegedimOkeTransferDate\\":\\"2023-02-27T23:54:23Z\\",\\"trace4CegedimOkeIntegrationDate\\":\\"2023-02-27T23:55:47Z\\",\\"trace5CegedimDboResponseDate\\":\\"2023-03-02T21:23:36Z\\",\\"trace6CegedimOkcExportDate\\":null,\\"requestComment\\":null,\\"responseComment\\":\\"Already Exists-Data 
Privacy\\",\\"individualEidSource\\":null,\\"individualEidValidated\\":\\"WCAP00028176\\",\\"workplaceEidSource\\":\\"WCAH00052991\\",\\"workplaceEidValidated\\":\\"WCAH00052991\\",\\"activityEidSource\\":null,\\"activityEidValidated\\":\\"WCAP0002817602\\",\\"addressEidSource\\":null,\\"addressEidValidated\\":\\"WCA00000006206\\",\\"countryEid\\":\\"CA\\",\\"processStatus\\":\\"REQUEST_RESPONDED\\",\\"requestStatus\\":\\"VAS_FOUND_BUT_INVALID\\",\\"updateDate\\":\\"2023-03-02T21:37:16Z\\"}]}"\n\t}\n}\nTo:\n{\n\t"eventType": "DCR_CHANGED",\n\t"eventTime": 1677801600678,\n\t"eventPublishingTime": 1677801600678,\n\t"countryCode": "CA",\n\t"dcrId": "f19305a6e6af4b5aa03d26c1ec1ae5a6",\n\t"targetChangeRequest": {\n\t\t"vrStatus": "CLOSED",\n\t\t"vrStatusDetail": "REJECTED",\n\t\t"oneKeyComment": "ONEKEY response comment: Already Exists-Data Privacy\\nONEKEY HCP ID: WCAP00028176\\nONEKEY HCO ID: WCAH00052991",\n\t\t"individualEidValidated": "WCAP00028176",\n\t\t"workplaceEidValidated": "WCAH00052991",\n\t\t"vrTraceRequest": "{\\"isoCod2\\":\\"CA\\",\\"validation.clientRequestId\\":\\"f19305a6e6af4b5aa03d26c1ec1ae5a6\\"}",\n\t\t"vrTraceResponse": 
"{\\"response.traceValidationRequestOutputFormatVersion\\":\\"1.8\\",\\"response.status\\":\\"SUCCESS\\",\\"response.resultSize\\":1,\\"response.totalNumberOfResults\\":1,\\"response.success\\":true,\\"response.results\\":[{\\"codBase\\":\\"WCA\\",\\"cisHostNum\\":\\"7853\\",\\"userEid\\":\\"07853\\",\\"requestType\\":\\"Q\\",\\"responseEntityType\\":\\"ENT_ACTIVITY\\",\\"clientRequestId\\":\\"f19305a6e6af4b5aa03d26c1ec1ae5a6\\",\\"cegedimRequestEid\\":\\"9d02f7547dbc4e659a9d230c91f96279\\",\\"customerRequest\\":null,\\"trace1ClientRequestDate\\":\\"2023-02-27T23:53:44Z\\",\\"trace2CegedimOkcProcessDate\\":\\"2023-02-27T23:53:40Z\\",\\"trace3CegedimOkeTransferDate\\":\\"2023-02-27T23:54:23Z\\",\\"trace4CegedimOkeIntegrationDate\\":\\"2023-02-27T23:55:47Z\\",\\"trace5CegedimDboResponseDate\\":\\"2023-03-02T21:23:36Z\\",\\"trace6CegedimOkcExportDate\\":null,\\"requestComment\\":null,\\"responseComment\\":\\"Already Exists-Data Privacy\\",\\"individualEidSource\\":null,\\"individualEidValidated\\":\\"WCAP00028176\\",\\"workplaceEidSource\\":\\"WCAH00052991\\",\\"workplaceEidValidated\\":\\"WCAH00052991\\",\\"activityEidSource\\":null,\\"activityEidValidated\\":\\"WCAP0002817602\\",\\"addressEidSource\\":null,\\"addressEidValidated\\":\\"WCA00000006206\\",\\"countryEid\\":\\"CA\\",\\"processStatus\\":\\"REQUEST_RESPONDED\\",\\"requestStatus\\":\\"VAS_FOUND_BUT_INVALID\\",\\"updateDate\\":\\"2023-03-02T21:37:16Z\\"}]}"\n\t}\n}\nand post back to the topic. DCR will be closed in 24h.New Case (2024-03-19)We need to force close/reject a couple of DCRs which cannot closed themselves. There were sent to OneKey, but for some reasons OK does not recognize them.  IQVIA have not generated the TraceVR response and we need to simulate it.  To break TRACEVR process for this DCRs we need to manually change the Mongo Status to REJECTED. If we keep SENT we are going to ask IQVIA forever in - TODO - describe this in SOPOpen Mongo and update DCRRegistryONEKEY for selected profiles. 
Change status to { "status.name" : "REJECTED" } Change details to "HUB manual update due to <ticket number MR>"Change from:To: Find the latest event for the chosen id and generate the event in the topic "<env>-internal-onekey-dcr-change-events-in" which will change their status\n "vrStatus": "CLOSED",\n "vrStatusDetail": "REJECTED", \n\n {\n "eventType": "DCR_CHANGED",\n "eventTime": <current_time>,\n "eventPublishingTime": <current_time>,\n "countryCode": "<country>",\n "dcrId": "<dcr_id>",\n "targetChangeRequest": {\n "vrStatus": "CLOSED",\n "vrStatusDetail": "REJECTED",\n "oneKeyComment": "HUB manual update due to MR-<ticket_number>",\n "individualEidValidated": null,\n "workplaceEidValidated": null,\n "vrTraceRequest": "{\\"isoCod2\\":\\"<country>\\",\\"validation.clientRequestId\\":\\"<dcr_id>\\"}",\n "vrTraceResponse": "{\\"response.traceValidationRequestOutputFormatVersion\\":\\"1.8\\",\\"response.status\\":\\"SUCCESS\\",\\"response.resultSize\\":1,\\"response.totalNumberOfResults\\":1,\\"response.success\\":true,\\"response.results\\":[{\\"codBase\\":\\"W<country>\\",\\"cisHostNum\\":\\"4605\\",\\"userEid\\":\\"HUB\\",\\"requestType\\":\\"Q\\",\\"responseEntityType\\":\\"ENT_ACTIVITY\\",\\"clientRequestId\\":\\"<dcr_id>\\",\\"cegedimRequestEid\\":\\"\\",\\"customerRequest\\":null,\\"trace1ClientRequestDate\\":\\"2024-02-27T09:29:34Z\\",\\"trace2CegedimOkcProcessDate\\":\\"2024-02-27T09:29:34Z\\",\\"trace3CegedimOkeTransferDate\\":\\"2024-02-27T09:32:22Z\\",\\"trace4CegedimOkeIntegrationDate\\":\\"2024-02-27T09:29:48Z\\",\\"trace5CegedimDboResponseDate\\":\\"2024-03-04T14:51:54Z\\",\\"trace6CegedimOkcExportDate\\":null,\\"requestComment\\":\\"\\",\\"responseComment\\":\\"HUB manual update due to 
MR-<ticket_number>\\",\\"individualEidSource\\":null,\\"individualEidValidated\\":null,\\"workplaceEidSource\\":null,\\"workplaceEidValidated\\":null,\\"activityEidSource\\":null,\\"activityEidValidated\\":null,\\"addressEidSource\\":null,\\"addressEidValidated\\":null,\\"countryEid\\":\\"<country>\\",\\"processStatus\\":\\"REQUEST_RESPONDED\\",\\"requestStatus\\":\\"VAS_NOT_FOUND\\",\\"updateDate\\":\\"2024-03-04T16:06:29Z\\"}]}"\n }\n}\n"
},
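The FROM→TO edit described in the first case above is a plain substitution of the wrong OneKey individual ID with the corrected one. Since the activity ID embeds the individual ID (WUKM0011095101 = WUKM00110951 + "01"), one global replace over the event body covers every occurrence, including inside the escaped vrTraceResponse payload. A minimal sketch with a trimmed-down event:

```shell
# Replace the old OneKey ID everywhere in the copied event body before
# pushing it back to the topic. The file content is a reduced example.
cat > event_from.json <<'EOF'
{"individualEidValidated": "WUKM00110951", "activityEidValidated": "WUKM0011095101"}
EOF
sed 's/WUKM00110951/WUKM00110955/g' event_from.json
# → {"individualEidValidated": "WUKM00110955", "activityEidValidated": "WUKM0011095501"}
```

Paste the resulting body into AKHQ with a new timestamp and the same Kafka key, as the SOP describes.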
{
"title": "CHANGELOG",
"pageID": "411338079",
"pageLink": "/display/GMDM/CHANGELOG",
"content": "List of DCRs:VR-00952672 = 163f209d24d94ea99bd7b47d9108366cVR-00952674 = dbd44964afba4bab84d50669b1ccbac3VR-00968353 = 07c363c5d3364090a2c0f6fdbbbca1ddRe COMPANY RE IM44066249 VR missing FR.msg"
},
{
"title": "Update DCRs with missing comments",
"pageID": "425495306",
"pageLink": "/display/GMDM/Update+DCRs+with+missing+comments",
"content": "DescriptionDue to temporary problem with our calls to Reltio workflow API we had multiple DCRs with missing workflow comments. The symptoms of this error were: no changeRequestComment field in DCRRegistry mongo collection and lack of content in Comment field in Reltio while viewing DCR by entityUrl.We have created a solution allowing to find deficient DCRs and update their comments in database and Reltio.GoalWe want to find all deficient DCRs in a given environment and update their comments in DCRRegistry and Reltio.This can be accomplished by following the procedure described below.ProcedureStep 1 - Configure the solutionGo to tools/dcr-update-workflow-comments module in mdm-hub-inbound-services repository.Prepare env configuration. Provide mongo.dbName and manager.url in application.yaml file.Create a file named application-secrets.yaml. Copy the content from application-secretsExample.yaml file and replace mock values with real ones appropriate to a given environment.Prepare solution configuration. Provide desired mode (find/repair) and DCR endTime time limits for deficient DCRs search in application.yaml.Here is an example of update-comments configuration.application.yaml\nupdate-comments:\n mode: find\n starting: 2024-04-01T10:00:00Z\n ending: 2024-05-15T10:00:00Z\nStep 2 - Find deficient DCRsRun the application using ApplicationServiceRunner.java in find mode with Spring profile: secrets.As a result, dcrs.csv file will appear in resources directory. It contains a list of DCRs to be updated in the next step. Those are DCRs ended within the configuration time limits, with no changeRequestComment field in DCRRegistry and having not empty processInstanceId (that value is needed to retrieve workflow comments from Reltio). This list can be viewed and altered if there is a need to omit a specific DCR update.Step 3 - Repair the DCRsChange update-comments.mode configuration to repair. 
Run the application exactly the same as in Step 2.As a result, report.txt file will be created in resources directory. It will contain a log for every DCR with its update status. If the update fails, it will contain the reason. In case of failed updated, the application can be ran again with dcrs.csv needed adjustments."
},
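The selection rule the find mode applies can be expressed as a filter over an exported DCRRegistry array: keep DCRs with no changeRequestComment and a non-empty processInstanceId. A hedged sketch (field names follow the description above; the endTime range check is left out for brevity):

```shell
# Mirror the "find deficient DCRs" criteria as a jq filter over an export.
# Sample documents are illustrative.
cat > dcrs.json <<'EOF'
[{"_id": "a1", "processInstanceId": "p1"},
 {"_id": "a2", "processInstanceId": "p2", "changeRequestComment": "done"},
 {"_id": "a3", "processInstanceId": ""}]
EOF
jq -c '[ .[] | select((has("changeRequestComment") | not)
               and (.processInstanceId // "" | length > 0)) | ._id ]' dcrs.json
# → ["a1"]
```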
{
"title": "GBLUS DCRs:",
"pageID": "310966586",
"pageLink": "/pages/viewpage.action?pageId=310966586",
"content": ""
},
{
"title": "ICUE VRs manual load from file",
"pageID": "310966588",
"pageLink": "/display/GMDM/ICUE+VRs+manual+load+from+file",
"content": "This SOP describes the manual load of selected ICUE DCRS to the GBLUS environment.Scope and issue description:On GBLUS PROD VRs(DCRs) are sent to IQVIA(ONEKEY) for validation using events. The process is responsible for this is described on this page (OK DCR flows (GBLUS)). IQVIA receives the data based on singleton profiles. The current flow enables only GRV and ENGAGE. ICUE was disabled from the flow and requires manual work to load this to IQVIA due to a high number of ICUE standalone profiles created by this system on January/February 2023. More details related to the ICUE issue are here:ODP_ US IQVIA DRC_VR Request for 2023.msgDCR_Counts_GBLUS_PROD.xlsxSteps to add ICUE in the IQVIA validation process:Check if there are no loads on environment GBLUS PROD:Check reltio-* topics and check if there are no huge number of events per minute and if there is no LAG on topics:Pick the input file from a client and after approval from Monica.Mulloy@COMPANY.com proceed with changes:example email and input file:First batch_ Leftover ICUE VRs (27th Feb-31st March).msgGenerate the events for the VR topic- id: onekey_vr_dcrs_manual destination: "${env}-internal-onekeyvr-in"Reconciliation target ONEKEY_DCRS_MANUALuse the resendLastEvent operation in the publisher (generate CHANGES events)After all events are pushed to topic verify on akhq if generated events are available on desired topicWait for events aggregation window closure(24h).Check if VR's are visible in DCRRequests mongo collection. createTime should be within the last 24h\n{ "entity.uri" : "entities/<entity_uri>" }\n"
},
{
"title": "HL DCR:",
"pageID": "302705613",
"pageLink": "/pages/viewpage.action?pageId=302705613",
"content": ""
},
{
"title": "How do we answer to requests about DCRs?",
"pageID": "416002490",
"pageLink": "/pages/viewpage.action?pageId=416002490",
"content": ""
},
{
"title": "EFK:",
"pageID": "284806852",
"pageLink": "/pages/viewpage.action?pageId=284806852",
"content": ""
},
{
"title": "FLEX Environments - Elasticsearch Shard Limit",
"pageID": "513736765",
"pageLink": "/display/GMDM/FLEX+Environments+-+Elasticsearch+Shard+Limit",
"content": "AlertSometimes, below alert gets triggered:This means that Elasticsearch has allocated >80% of allowed number of shards (default 1000 max).Further DebuggingAlso, we can check directly on the EFK cluster what is the shard count:Log into Kibana and choose "Dev Tools" from the panel on the left:Use one of below API calls:To fetch current cluster status and number of active/unassigned shards (# of active shards + # of unassigned shards = # of allocated shards):GET _cluster/healthTo check the current assigned shards limit:GETSolution: Removing Old Shards/IndicesThis is the preferred solution. Old indices can be removed through Kibana.Log into Kibana and choose "Management" from the panel on the left:Choose "Index Management":Find and mark indices that can be removed. In my case, I searched for indices containing "2023" in their names:Click "Manage Indices" and "Delete Indices". Confirm:Solution: Increasing the LimitThis is not the preferred solution, as it is not advised to go beyond the default limit of 1000 shards per node - it can lead to worse performance/stability of the Elasticsearch cluster.TODO: extend this section when we need to increase the limit somewhere, use this article: https://www.elastic.co/guide/en/elasticsearch/reference/7.4/misc-cluster.html"
},
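The 80% check behind the alert can be reproduced from a saved GET _cluster/health response: allocated shards = active + unassigned, and the default limit is 1000 shards per data node. A sketch (field names per the Elasticsearch cluster health API; the numbers are illustrative):

```shell
# Save the /_cluster/health response to a file, then compute how much of
# the default shard limit is allocated.
cat > health.json <<'EOF'
{"number_of_data_nodes": 3, "active_shards": 2350, "unassigned_shards": 50}
EOF
jq -r '((.active_shards + .unassigned_shards) / (.number_of_data_nodes * 1000) * 100 | floor)
       | "shard allocation: \(.)% of limit"' health.json
# → shard allocation: 80% of limit
```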
{
"title": "Kibana: How to Restore Data from Snapshots",
"pageID": "284806856",
"pageLink": "/display/GMDM/Kibana%3A+How+to+Restore+Data+from+Snapshots",
"content": "NOTE: The time of restoring is based on the amount of data you wanted to restore. Before beginning of restoration you have to be sure that the elastic cluster has a sufficient amount of storage to save restoring data.To restore data from the snapshot you have to use "Snapshot and Restore" site from Kibana. It is one of sites avaiable in "Stack Management" section:Select the snapshot which contains data you are interested in and click the Restore button:In the presented wizard please set up the following options:Disable the option "All data streams and indices" and provide index patterns that match index or data stream you want to restore:It is important to enable option "Rename data streams and indices" and set "Capture pattern" as "(.+)" and "Replacement pattern" as "$1-restored-<idx>", where the idx <1, 2, 3, ... , n> - it is required once we restore more than one snapshot from the same datastream. In another case, the restore operation will override current elasticsearch objects and we lost the data:The rest of the options on this page have to be disabled:Click the "Next" button to move to "Index settings" page. Leave all options disabled and go to the next page.On the page "Review restore details" you can see the summary of the restore process settings. Validate them and click the "Restore snapshot" button to start restoring.You can track the restoration progress in "Restore Status" section:When data is no longer needed, it should be deleted:"
},
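The capture/replacement pair used in the restore wizard, "(.+)" → "$1-restored-<idx>", is a plain regex rewrite over the index/data stream name. The same transformation shown with sed (using \1 in place of $1; the index name is illustrative):

```shell
# Demonstrate the rename pattern the wizard applies to each restored index.
echo "logstash-transactions-2023.12.01" | sed -E 's/(.+)/\1-restored-1/'
# → logstash-transactions-2023.12.01-restored-1
```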
{
"title": "External proxy",
"pageID": "379322691",
"pageLink": "/display/GMDM/External+proxy",
"content": ""
},
{
"title": "No downtime Kong restart/upgrade",
"pageID": "379322693",
"pageLink": "/pages/viewpage.action?pageId=379322693",
"content": "This SOP describes how to perform "no downtime" restart. Resourceshttp://awsprodv2.COMPANY.com/ - AWS consolehttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/ansible/install_kong.yml - ansible playbook SOPRemove one node instance from target groups (AWS console)Access AWS console http://awsprodv2.COMPANY.com/. Log in using COMPANY SSOChoose Account: prod-dlp-wbs-rapid (432817204314). Role: WBS-EUW1-GBICC-ALLENV-RO-SSOChange region to Europe(Ireland - eu-west-1)Got to EC2 → Load Balancing → Target GroupsSearch for target group\n-prod-gbl-mdm\nThere should be 4 target groups visible. 1 for mdmhub api and 3 for KafkaRemove first instance (EUW1Z2DL113) from all 4 target groups.Perform below steps for all target groupsTo do so, open each target group select desired instance and choose 'deregister'. Now this instance should have 'Health status': 'Draining'. Next do the same operation for other target groups.Do not remove two instances from consumer group at the same time. It'll cause API unabailability.Also make sure to remove the same instance from all target groups.Wait for Instance to be removed from target groupWait for target groups to be adjusted. Deregistered instance should eventually be removed from target groupAdditionally you can check kong logs directlyFirst instance: \nssh ec2-user@euw1z2dl113.COMPANY.com\ncd /app/kong/\ndocker-compose logs -f --tail=0\n# Check if there are new requests to exteral api\nSecond isntance: \nssh ec2-user@euw1z2dl114.COMPANY.com\ncd /app/kong/\ndocker-compose logs -f --tail=0\n# Check if there are new requests to exteral api\nSome internal requests may be still visible, eg. 
metricsPerform restart of Kong on removed instance (Ansible playbook)Execute ansible playbook inside mdm-hub-cluster-env repository inside 'ansible' directoryFor the first instance:\nansible-playbook install_kong.yml -i inventory/proxy_prod/inventory  -l kong_01\nFor the second instance:\nansible-playbook install_kong.yml -i inventory/proxy_prod/inventory  -l kong_02\nMake sure that kong_01 is the same instance you've removed from target group(check ansible inventory)Re-add the removed instancePerform this steps for all target groupsSelect target groupChoose 'Register targets'Filter instances to find previously removed instance. Select it and choose 'Include as pending below'. Make sure that correct port is chosenVerify below request and select 'Register pending targets'Instance should be in 'Initial' state in target groupWait for instance to be properly added to target groupWait for all instances to have 'Healthy' status instead of 'Initial'. Make sure everything work as expected (Check Kong logs)Perform steps 1-5 for second Kong instanceSecond instance: euw1z2dl114.COMPANY.comSecond Kong host(ansible inventory): kong_02"
},
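When the AWS console is not at hand, the deregister/wait/re-register steps above have AWS CLI equivalents. A hedged sketch: the target group ARN and instance ID are placeholders, and the account/role must already allow elbv2 actions (the SOP's RO role may not):

```shell
# Deregister the node from one target group, wait for draining to finish,
# restart Kong via ansible, then re-register and wait for 'healthy'.
TG_ARN="arn:aws:elasticloadbalancing:eu-west-1:432817204314:targetgroup/EXAMPLE"  # placeholder
INSTANCE="i-0123456789abcdef0"  # placeholder ID of EUW1Z2DL113
aws elbv2 deregister-targets --target-group-arn "$TG_ARN" --targets Id="$INSTANCE"
aws elbv2 wait target-deregistered --target-group-arn "$TG_ARN" --targets Id="$INSTANCE"
# ... run the install_kong.yml playbook for this node here ...
aws elbv2 register-targets --target-group-arn "$TG_ARN" --targets Id="$INSTANCE"
aws elbv2 wait target-in-service --target-group-arn "$TG_ARN" --targets Id="$INSTANCE"
```

Repeat per target group, and as the SOP stresses, never drain both Kong instances from the same target group at once.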
{
"title": "Full Environment Refresh - Reltio Clone",
"pageID": "386803861",
"pageLink": "/display/GMDM/Full+Environment+Refresh+-+Reltio+Clone",
"content": ""
},
{
"title": "Full Environment Refresh",
"pageID": "386803864",
"pageLink": "/display/GMDM/Full+Environment+Refresh",
"content": "IntroductionBelow steps are the record of steps done in January 2024 due to Reltio Data Clone between GBLUS PROD → STAGE and APAC PROD → STAGE.Environment refresh consists of:disabling MDM Hub componentsfull cleanup of existing STAGE data: Kafka and MongoDBidentifying and copying cache collections from PROD to STAGE MongoDBre-enabling MDM Hub componentsrunning the Hub Reconciliation DAGDisabling Services, Kafka CleanupComment out the EFK topics in fluentd configuration:\nmdm-hub-cluster-env\\apac\\nprod\\namespaces\\apac-backend\\values.yaml\nDeploy apac-backend through Jenkins, to apply the fluentd changes:https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20NPROD/job/deploy_backend_apac_nprod/(fluentd pods in the apac-backend namespace should recreate)Block the apac-stage mdmhub deployment job in Jenkins:https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20NPROD/job/deploy_mdmhub_apac_nprod_apac-stage/Notify the monitoring/support Team, that the environment is disabled (in case alerts are triggered or users inquire via emails)Use Kubernetes & Helm command line tools to uninstall the mdmhub components and Kafka topics:use kubectx/kubectl to switch context to apac-nprod cluster:use helm to uninstall below two releases from the apac-nprod cluster (you can confirm release names by using the "$ helm list -A" command):\n$ helm uninstall mdmhub -n apac-stage\n$ helm uninstall kafka-resources-apac-stage -n apac-backend\nconfirm there are no pods in the apac-stage namespace:list remaining Kafka topics (kubernetes kafkatopic resources) with "apac-stage" prefix:manually remove all the remaining "apac-stage" prefixed topics. 
Note that it is expected that some topics remain - some of them have been created by Kafka Streams, for example.MongoDB CleanupLog into the APAC NPROD MongoDB through Studio 3T.Clear all the collections in the apac-stage database.Exceptions:"batchInstance" collection"quartz-" prefixed collections"shedLock" collectionWait until MongoDB cleans all these collections (could take a few hours):Log into the APAC PROD MongoDB through Studio 3T. You want to have both connections in the same session.Copy below collections from APAC PROD (Ctrl+C):keyIdRegistryrelationCachesequenceCountersRight click APAC NPROD database "apac-stage" and choose "Paste Collections"Dialog will appear - use below options for each collection:Collections Copy Mode: Append to existing target collectionDocuments Copy Mode: Overwrite documents with same _idCopy indices from the source collection: uncheckWait until all the collections are copied.Snowflake CleanupCleanup the base tables:\nTRUNCATE TABLE CUSTOMER.ENTITIES;\nTRUNCATE TABLE CUSTOMER.RELATIONS;\nTRUNCATE TABLE CUSTOMER.LOV_DATA;\nTRUNCATE TABLE CUSTOMER.MATCHES;\nTRUNCATE TABLE CUSTOMER.MERGES;\nTRUNCATE TABLE CUSTOMER.HIST_INACTIVE_ENTITIES;\nRun the full materialization jobs:\nCALL CUSTOMER.MATERIALIZE_FULL_ALL('M', 'CUSTOMER');\nCALL CUSTOMER.HI_MATERIALIZE_FULL_ALL('CUSTOMER');\nCheck for any tables that haven't been cleaned properly:\nSELECT *\nFROM INFORMATION_SCHEMA.TABLES\nWHERE 1=1\nAND TABLE_TYPE = 'BASE TABLE'\nAND TABLE_NAME ILIKE 'M^_%' ESCAPE '^'\nAND ROW_COUNT != 0;\nRun the materialization for those tables specifically or you can run the queries prepared from the bellow query:\nSELECT 'TRUNCATE TABLE ' || TABLE_SCHEMA || '.' 
|| TABLE_NAME || ';'\nFROM INFORMATION_SCHEMA.TABLES\nWHERE 1=1\nAND TABLE_TYPE = 'BASE TABLE'\nAND TABLE_NAME ILIKE 'M^_%' ESCAPE '^'\nAND ROW_COUNT != 0;\nRe-Enabling HubGet a confirmation that the Reltio data cloning process has finished.Re-enable the mdmhub apac-stage deployment job and perform a deployment of an adequate version.Uncomment previously commented (look: Disabling The Services, Kafka Cleanup, 1.) EFK transaction topic list, deploy apac-backend. Fluentd pods in the apac-backend namespace should recreate.Wait for both deployments to finish (should be performed one after another).Test the MDM Hub API - try sending a couple of GET requests to fetch some entities that exist in Reltio. Confirm that the result is correct and the requests are visible in Kibana (dashboard APAC-STAGE API Calls):(2025-05-19 Piotr: we no longer need to do this - Matches Enricher now deploys with minimum 1 pod in every environment) Run below command in your local Kafka client environment.\nkafka-console-consumer.sh --bootstrap-server kafka-apac-nprod-gbl-mdm-hub.COMPANY.com:9094 --group apac-stage-matches-enricher --topic apac-stage-internal-reltio-matches-events --consumer.config client.sasl.properties\nThis needs to be done to create the consumergroup, so that Keda can scale the deployment in the future.Running The Hub ReconciliationAfter confirming that Hub is up and working correctly, navigate to APAC NPROD Airflow:https://airflow-apac-nprod-gbl-mdm-hub.COMPANY.com/homeTrigger the hub_reconciliation_v2_apac_stage DAG:To minimize the chances of overfilling the Kafka storage, set retention of reconciliation metrics topics to an hour:Navigate to APAC NPROD AKHQ:https://akhq-apac-nprod-gbl-mdm-hub.COMPANY.com/Find below topics and navigate to their "Configs" 
tabs:apac-stage-internal-reconciliation-metrics-calculator-inhttps://akhq-apac-nprod-gbl-mdm-hub.COMPANY.com/ui/apac-nprod-mdm-kafka/topic/apac-stage-internal-reconciliation-metrics-calculator-in/configsapac-stage-internal-reconciliation-metrics-efk-transactionshttps://akhq-apac-nprod-gbl-mdm-hub.COMPANY.com/ui/apac-nprod-mdm-kafka/topic/apac-stage-internal-reconciliation-metrics-efk-transactions/configsFor each topic, find the config "retention.ms" (do not confuse it with "delete.retention.ms", which is responsible for compaction) and set it to 3600000. Apply the changes.Monitor the DAG, event processing and Kafka/Elasticsearch storage.After the DAG finishes, disable the reconciliation jobs (if reconciliations start uncontrollably before the data is fully restored, it will unnecessarily increase the workload):Manually disable the hub_reconciliation_v2_apac_stage DAG: https://airflow-apac-nprod-gbl-mdm-hub.COMPANY.com/dags/hub_reconciliation_v2_apac_stage/gridManually disable the reconciliation_snowflake_apac_stage DAG: https://airflow-apac-nprod-gbl-mdm-hub.COMPANY.com/dags/reconciliation_snowflake_apac_stage/gridAfter all reconciliation events are processed, the environment is ready to use. Compare entity/relation counts across Reltio, MongoDB and Snowflake to confirm that everything went well.Re-enable the reconciliation jobs from step 5."
},
{
"title": "Full Environment Refresh - Legacy (Docker Environments)",
"pageID": "164470082",
"pageLink": "/pages/viewpage.action?pageId=164470082",
"content": "Steps to take when a Hub environment needs to be cleaned up or refreshed.1.PreparationAdd line ssl.endpoint.identification.algorithm= to client.sasl.properties in your kafka_client folder.Having done that go to the <kafka_client path>/bin folder and launch the command:$ ./consumer_groups_sasl.sh --describe --group <group_name> | sortFor every consumer group in this environment. This will list currently connected consumers.If there are external consumers connected they will prevent deletion of topics they're connected to. Contact people responsible for those consumers to disconnect them.2. Stop GW/Hub components: subscriber, publisher, manager, batch_channel$ docker stop <container name>3. Double-check that consumer groups (internal and external) have been disconnected4. Delete all topics:a) Preparation:$ docker exec -it kafka_kafka_1 bash$ export KAFKA_OPTS=-Djava.security.auth.login.config=/ssl/kafka_server_jaas.conf$ kafka-topics.sh --zookeeper zookeeper:2181 --list | grep <env name>b) Deleting the topics:$ kafka-topics.sh --zookeeper zookeeper:2181 --delete --topic <topic1> || true && \\kafka-topics.sh --zookeeper zookeeper:2181 --delete --topic <topic2> || true&& \\kafka-topics.sh --zookeeper zookeeper:2181 --delete --topic <topic3>  || true &&          (...) continue for all topics5. Check whether topics are deleted on disk and using $ ./topics.sh --list 6. Recreate the topics by launching the Ansible playbook with parameter create_or_update: True set for desired topics in topics.yml7. Cleanup MongoDB:Access the collections corresponding to the desired environment and choose option "Clear collections" on the following collections: "entityHistory","gateway_errors", "hub_errors", hub_reconcilliation.8. After confirming everything is ready (in case of environment refresh there has to be a notification from Reltio that it's ready) restart GW and Hub components9. Check component logs to confirm they started up and connected correctly."
},
{
"title": "Hub Application:",
"pageID": "302706338",
"pageLink": "/pages/viewpage.action?pageId=302706338",
"content": ""
},
{
"title": "Batch Channel: Importing MAPP's Extract",
"pageID": "164470063",
"pageLink": "/display/GMDM/Batch+Channel%3A+Importing+MAPP%27s+Extract",
"content": "To import MAPP's extract you have to:Have original extract (eg. original.csv) which was uploaded to Teams channel,Open it in Excel and save as "CSV (Comma delimited) (*.csv)",Run dos2unix tool on the file.Do steps from 2 and 3 on extract file (eg. changes.csv) received form MAPP's team,Compare original file to file with changes and select only lines which was changed in the second file: ( head -1 changes.csv && diff original.csv changes.csv | grep '^>' | sed 's/^> //' ) > result.csvDivide result file into the smaller ones by running splitFile.sh script: ./splitFile.sh  result.csv. The script will generate set of files where theirs names will end with _{idx}.{extension} eg.: result_00.csv, result_01.csv, result_02.csv etc.Upload the result set of files to s3 location: s3://pfe-baiaes-eu-w1-project/mdm/inbound/mapp/. This action will trigger batch-channel component, which will start loading changes to MDM.splitFile.sh"
},
{
"title": "Callback Service: How to Find Events Stuck in Partial State",
"pageID": "273681936",
"pageLink": "/display/GMDM/Callback+Service%3A+How+to+Find+Events+Stuck+in+Partial+State",
"content": "What is partial state?When an event gets processed by Callback Service, if any change is done at the precallback stage, event will not be sent further, to Event Publisher. It is expected that in a few seconds another event will come, signaling the change done by precallback logic - this one gets passed to Publisher and downstream clients/Snowflake as far as precallback detects no need for a change.Sometimes the second event is not coming - this is what we call a partial state. It means, that update event will actually not reach Snowflake and downstream clients. PartialCounter functionality of CallbackService was implemented to monitor such behaviour.How to identify that an event is stuck in partial state?PartialCounter is counting events which have not been passed down to Event Publisher (identified by Reltio URI) and exporting this count as a Prometheus (Actuator) metric. Prometheus alert "callback_service_partial_stuck_24h" is notifying us that an event has been stuck for more than 24 hours.How to find events stuck in partial state?Use below command to fetch the list of currently stuck events as JSON array (example for emea-dev). You will have to authorize using mdm_test_user or mdm_admin:\n# curl https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-emea-dev/precallback/partials\nMore details can be found in Swagger Documentation: https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-admin-spec-emea-dev/swagger-ui/index.html#/What to do?Events identified as stuck in partial state should be reconciled."
},
{
"title": "Integration Test - how to run tests locally from your computer to target environment",
"pageID": "337839648",
"pageLink": "/display/GMDM/Integration+Test+-+how+to+run+tests+locally+from+your+computer+to+target+environment",
"content": "Steps:First, choose the environment and go to the Jenkins integration tests directory:https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/based on APAC DEV:go to https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_apac/choose the latest RUN and click Workspace on the leftClick on /home/jenkins workspace linkGo to /code/mdm-integretion-tests/src/test/resources/ Download 3 filescitrus-application.propertieskafka_jaas.confkafka_truststore.jksEdit citrus-application.propertieschange local K8s URLS to real URLS and local PATH. Leave other variables as is. in that case, use the KeePass that contains all URLs:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/credentials.kdbxExample code that is adjusted to APAC DEVAPI URLs + local PATH to certsThis is just the example from APAC DEV that contains the C:\\\\Users\\\\mmor\\\\workspace\\\\SCM\\\\mdm-hub-inbound-services\\\\ path - replace this with your own code localization 
\ncitrus.spring.java.config=com.COMPANY.mdm.tests.config.SpringConfiguration\n\njava.security.auth.login.config=C:\\\\Users\\\\mmor\\\\workspace\\\\SCM\\\\mdm-hub-inbound-services\\\\mdm-integretion-tests\\\\src\\\\test\\\\resources\\\\kafka_jaas.conf\n\nreltio.oauth.url=https://auth.reltio.com/\nreltio.oauth.basic=secret\nreltio.url=https://mpe-02.reltio.com/reltio/api/2NBAwv1z2AvlkgS\nreltio.username=svc-pfe-mdmhub\nreltio.password=secret\nreltio.apiKey=secret\nreltio.apiSecret=secret\n\nmongo.dbUrl=mongodb://admin:secret@mongo-apac-nprod-gbl-mdm-hub.COMPANY.com:27017/reltio_apac-dev?authMechanism=SCRAM-SHA-256&authSource=admin\nmongo.url=mongodb://mongo-apac-nprod-gbl-mdm-hub.COMPANY.com:27017\nmongo.dbName=reltio_apac-dev\nmongo.username=mdmgw\nmongo.password=secret\n\ngateway.url=https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-gw-apac-dev\ngateway.username=mdm_test_user\ngateway.apiKey=secret\n\nbatchService.url=https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-batch-apac-dev\nbatchService.username=mdm_test_user\nbatchService.apiKey=secret\nbatchService.limitedUsername=mdm_test_user_limited\nbatchService.limitedApiKey=secret\n\nmapchannel.url=https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/dev-map-api\nmapchannel.username=mdm_test_user\nmapchannel.apiKey=secret\n\napiRouter.url=https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-apac-dev\napiRouter.dcrReltioUserApiKey=secret\napiRouter.dcrOneKeyUserApiKey=secret\napiRouter.intTestUserApiKey=secret\napiRouter.dcrReltioUser=mdm_dcr2_test_reltio_user\napiRouter.dcrOneKeyUser=mdm_dcr2_test_onekey_user\napiRouter.intTestUser=mdm_test_user\n\nadminService.url=https://api-apac-nprod-gbl-mdm-hub.COMPANY.com/api-admin-apac-dev\nadminService.intTestUserApiKey=secret\nadminService.intTestUser=mdm_test_user\n\ndeg.url=https://hcp-gateway-dev.eu.cloudhub.io/v1\ndeg.oAuth2Service=https://hcp-gateway-dev.eu.cloudhub.io/\ndeg.apiKey=secret\ndeg.apiSecret=secret\n\nkafka.brokers=kafka-apac-nprod-gbl-mdm-hub.COMPANY.com:9094\n
kafka.group=int_test_dev\nkafka.topic=apac-dev-out-simple-all-int-tests-all\nkafka.security.protocol=SASL_SSL\nkafka.sasl.mechanism=SCRAM-SHA-512\nkafka.ssl.truststore.location=C:\\\\Users\\\\mmor\\\\workspace\\\\SCM\\\\mdm-hub-inbound-services\\\\mdm-integretion-tests\\\\src\\\\test\\\\resources\\\\kafka_truststore.jks\nkafka.ssl.truststore.password=secret\nkafka.receive.timeout=60000\nkafka.purgeEndpoints.timeout=100000\n...\n...\n...\nNow go to your local code checkout - mdm-hub-inbound-services\\mdm-integretion-testsCopy 3 files to the mdm-integretion-tests/src/test/resourcesSelect the test and click RUNEND - the result: You are running Jenkins integration tests from your local computer on target DEV environment. Now you can check logs locally and repeat. "
},
{
"title": "Manager: Reload Entity - Fix COMPANYAddressID Using Reload Action",
"pageID": "229180577",
"pageLink": "/display/GMDM/Manager%3A+Reload+Entity+-+Fix+COMPANYAddressID+Using+Reload+Action",
"content": "Before starting check what DQ rules have -reload action on the list. Now it is SourceMatchCategory and COMPANYAddressIdcheck here - - example dq ruleupdate with -reload operation to reload more DQ rulesGenerate events using the script : scriptorscript - fix SourceMatchCategory without ONEKEYthe script gets all ACTIVE entities with Addressesthat have missing COMPANYAddressIdthat COMPANYAddressID is lower that correct value for each env: emea 5000000000  amer 6000000000  apac 7000000000Script generate events: example:entities/lwBrc9K|{"targetEntity":{"entityURI":"entities/lwBrc9K","sources":["FUSIONMDM"],"targetType":"entityUri"},"overwrites":[{"uriMask":"COMPANYAddressID"}]}entities/1350l3D6|{"targetEntity":{"entityURI":"entities/1350l3D6","sources":["FUSIONMDM"],"targetType":"entityUri"},"overwrites":[{"uriMask":"COMPANYAddressID"}]}entities/1350kZNI|{"targetEntity":{"entityURI":"entities/1350kZNI","sources":["FUSIONMDM"],"targetType":"entityUri"},"overwrites":[{"uriMask":"COMPANYAddressID"}]}entities/cPSKBB9|{"targetEntity":{"entityURI":"entities/cPSKBB9","sources":["FUSIONMDM"],"targetType":"entityUri"},"overwrites":[{"uriMask":"COMPANYAddressID"}]}Make a fix for COMPANYAddressID that is lower than the correct value for each envGo to the keyIdRegistry Mongo collectionfind all entries that have generatedId lower than emea 5000000000  amer 6000000000  apac 7000000000increase the generatedId  adding the correct value from correct environments using the script - scriptGet the file and push it to the <env>-internal-async-all-reload-entity topic./start_sasl_producer.sh <env>-internal-async-all-reload-entityor using the input file  ./start_sasl_producer.sh <env>-internal-async-all-reload-entity < reload_dev_emea_pack_entities.txt (file that contains each json generated by the Mongo script, each row in new line)How to Run a script on docker:example emea DEV:go to - svc-mdmnpr@euw1z2dl111docker exec -it mongo_mongo_1 bashcd  /data/configdbcreate script - touch 
reload_entities_fix_COMPANYaddressid_hub.jsedit header:db = db.getSiblingDB("<DB>")db.auth("mdm_hub", "<PASS>")RUN: nohup mongo --host mongo_dev_emea_reltio_rs/euw1z2dl111.COMPANY.com:27017 -u mdm_hub -p <PASS> --authenticationDatabase reltio_dev reload_entities_fix_COMPANYaddressid_hub.js &ORnohup mongo --host mongo_dev_emea_reltio_rs/euw1z2dl111.COMPANY.com:27017 -u mdm_hub -p <PASS> --authenticationDatabase reltio_dev reload_entities_fix_sourcematch_hub_DEV.js > smc_DEV_FIX.out 2>&1 &nohup mongo --host mongo_dev_emea_reltio_rs/euw1z2dl111.COMPANY.com:27017 -u mdm_hub -p <PASS> --authenticationDatabase reltio_qa reload_entities_fix_sourcematch_hub_QA.js > smc_QA_FIX.out 2>&1 &nohup mongo --host mongo_dev_emea_reltio_rs/euw1z2dl111.COMPANY.com:27017 -u mdm_hub -p <PASS> --authenticationDatabase reltio_stage reload_entities_fix_sourcematch_hub_STAGE.js > smc_STAGE_FIX.out 2>&1 &"
},
{
"title": "Manager: Resubmitting Failed Records",
"pageID": "164470200",
"pageLink": "/display/GMDM/Manager%3A+Resubmitting+Failed+Records",
"content": "There is new API in manager for getting/resubmitting/removing failed records from batches.1. Get failed records method - it returns list of errors basing on provided criteriasPOST /errorsRequestList of FieldFilter objectsfield - name of the field that is stored in errorqueueoperation - operation that is used to create query, possible options are: Equals, Is, Greater, Lowervalue - the value which we compareii. Example:[        {            "field" : "HubAsyncBatchServiceBatchName",            "operation" : "Equals",            "value" : "testBatchBundle"        }    ]b. Responsei. List of Error objectsid - identifier of the error batchName - batch nameobjectType - object typebatchInstanceId - batch instance idkey - keyerrorClass - the name of the error class that happen during record submissionerrorMessage - the message of the error that happen during record submissionresubmitted - true/false - it tells if errror was resubmitted or notdeleted - true/false - it tells if error was deleted or not during remove api callii. 
Example:[    {        "id": "5fa93377e720a55f0bb68c99",        "batchName": "testBatchBundle",        "objectType": "configuration/entityTypes/HCP",        "batchInstanceId": "0+3j45V7S1K1GT2i6c3Mqw",        "key": "{\\r\\n  \\"type\\" : \\"SHS\\",\\r\\n  \\"value\\" : \\"TEST:HCP:b09b6085-28dc-451d-85b6-fe3ce2079446\\"\\r\\n}",        "errorClass": "javax.ws.rs.ClientErrorException",        "errorMessage": "HTTP 409 Conflict",        "resubmitted": false,        "deleted": false    },    {        "id": "5fa93378e720a55f0bb68ca6",        "batchName": "testBatchBundle",        "objectType": "configuration/entityTypes/HCP",        "batchInstanceId": "0+3j45V7S1K1GT2i6c3Mqw",        "key": "{\\r\\n  \\"type\\" : \\"SHS\\",\\r\\n  \\"value\\" : \\"TEST:HCP:25bfc672-9ba1-44a5-b3c1-d657de701d76\\"\\r\\n}",        "errorClass": "javax.ws.rs.ClientErrorException",        "errorMessage": "HTTP 409 Conflict",        "resubmitted": false,        "deleted": false    },    {        "id": "5fa93377e720a55f0bb68c9a",        "batchName": "testBatchBundle",        "objectType": "configuration/entityTypes/HCP",        "batchInstanceId": "0+3j45V7S1K1GT2i6c3Mqw",        "key": "{\\r\\n  \\"type\\" : \\"SHS\\",\\r\\n  \\"value\\" : \\"TEST:HCP:60067d46-07a6-4902-b9e8-1bf2acbc8a6e\\"\\r\\n}",        "errorClass": "javax.ws.rs.ClientErrorException",        "errorMessage": "HTTP 409 Conflict",        "resubmitted": false,        "deleted": false    },    {        "id": "5fa93377e720a55f0bb68c9b",        "batchName": "testBatchBundle",        "objectType": "configuration/entityTypes/HCP",        "batchInstanceId": "0+3j45V7S1K1GT2i6c3Mqw",        "key": "{\\r\\n  \\"type\\" : \\"SHS\\",\\r\\n  \\"value\\" : \\"TEST:HCP:e8d05d96-7aa3-4059-895e-ce20550d7ead\\"\\r\\n}",        "errorClass": "javax.ws.rs.ClientErrorException",        "errorMessage": "HTTP 409 Conflict",        "resubmitted": false,        "deleted": false    },    {        "id": "5fa96ba300061d51e822854a",        "batchName": 
"testBatchBundle",        "objectType": "configuration/entityTypes/HCP",        "batchInstanceId": "iN2LB3TiT3+Sd5dYemDGHg",        "key": "{\\r\\n  \\"type\\" : \\"SHS\\",\\r\\n  \\"value\\" : \\"TEST:HCP:973411ec-33d4-477e-a6ae-aca5a0875abb\\"\\r\\n}",        "errorClass": "javax.ws.rs.ClientErrorException",        "errorMessage": "HTTP 409 Conflict",        "resubmitted": false,        "deleted": false    }]2. Resubmit failed records - it takes list of FieldFilter objects and returns list of errors that were resubmitted - if it was correctly resubmitted resubmitted flag is set to truePOST /errors/_resubmita.  Requesti. List of FieldFilter objectsb. Responsei. List of Error objects3. Remove failed records - it takes list of FieldFilter objects that contains criteria for removing error objects and returns list of errors that were deleted - if it was correctly deleted deleted flag is set to truePOST /errors/_removea.  Requesti. List of FieldFilter objectsb. Responsei. List of Error objects"
},
{
"title": "Issues diagnosis",
"pageID": "438905271",
"pageLink": "/display/GMDM/Issues+diagnosis",
"content": ""
},
{
"title": "API issues",
"pageID": "438905273",
"pageLink": "/display/GMDM/API+issues",
"content": "Symptomsat least one of the following alert is active:kong_http_500_status_prod,kong_http_502_status_prod,kong_http_503_status_prod,kong3_http_500_status_prod,kong3_http_502_status_prod,kong3_http_503_status_prod,Clients report problems related to communication with our HTTP endpoints.ConfirmationTo confirm if problem with API is really occurring, you have to invoke some operation that is shared by HTTP interface. To do this you can use Postman or other tool that can run HTTP requests. Below you can find a few examples that describe how to check API in components that expose this:mdm-manager:GET {{ manager_url }}/entities?filter=equals(type, 'configuration/entityTypes/HCP') - The request should execute properly (HTTP status code 200) and returns some HCP objects.api-router:GET {{ api_router_url }}/entities?filter=equals(type, 'configuration/entityTypes/HCP') - The request should execute properly (HTTP status code 200) and returns some HCP objects.batch-service:GET {{ batch_service_url }}/batchController/NA/instances/NA - The request should return 403 HTTP Code and body:{    "code": "403",    "message": "Forbidden: com.COMPANY.mdm.security.AuthorizationException: Batch 'NA' is not allowed."}dcr-service2:TODOReasons findingBelow diagram presents the HTTP request processing flow with engaged components:"
},
{
"title": "Kafka:",
"pageID": "164470059",
"pageLink": "/pages/viewpage.action?pageId=164470059",
"content": ""
},
{
"title": "Client Configuration",
"pageID": "243862610",
"pageLink": "/display/GMDM/Client+Configuration",
"content": "      1. InstallationTo install kafka binary version 2.8.1 should be downloaded and installed fromhttps://kafka.apache.org/downloads      2. The email from the MDMHUB TeamIn the email received from the MDMHUB support team you can find connection parameters like server address, topic name, group name, and the following files:client.sasl.properties kafka consumer properties,kafka_client_jaas.conf JAAS credentials requiered to authenticate with Kafka server,kafka_truststore.jks java truststore required to build certification path of SSL connections.      3. Example command to test client and configurationTo connect with Kafka using the command line client save delivered files on your disc and run the following command:export KAFKA_OPTS=-Djava.security.auth.login.config={ ●●●●●●●●●●●● Kafka_client_jaas.conf }kafka-console-consumer.sh --bootstrap-server { kafka server } --group { group } --topic { topic_name } --consumer.config { consumer config file eg. client.sasl.properties}For example for amer dev:●●●●●●●●●●● in provided file: kafka_client_jaas.confKafka server: kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094Group: dev-muleTopic: dev-out-full-pforcerx-grv-allConsumer config is in provided file: client.sasl.propertiesexport KAFKA_OPTS=-Djava.security.auth.login.config=kafka_client_jaas.confkafka-console-consumer.sh --bootstrap-server kafka-amer-nprod-gbl-mdm-hub.COMPANY.com:9094 --group dev-mule --topic dev-out-full-pforcerx-grv-all --consumer.config client.sasl.properties"
},
{
"title": "Client Configuration in k8s",
"pageID": "284806978",
"pageLink": "/display/GMDM/Client+Configuration+in+k8s",
"content": "Each of k8s clusters have installed kafka-client pod. To find this pod you have to list all pods deployed in *-backend namespace and select pod which name starts with kafka-client:\nkubectl get pods --namespace emea-backend  | grep kafka-client\nTo run commands on this pod you have to remember its name and use in "kubectl exec" command:Using kubectl exec with kafka client\nkubectl exec --namespace emea-backend kafka-client-8585fbb7f9-55cjm -- <command>\nAs a <command> you can use all of standard Kafka client scripts eg. kafka-consumer-groups.sh or one of wrapper scripts which simplify configuration of standard scripts - broker and authentication configuration. They are the following scripts:consumer_groups.sh - it's wrapper of kafka-consumer-groups,consumer_groups_delete.sh - it's also wrapper of kafka-consumer-groups and can be used only to delete consumer group. Has only one input argument - consumer group name,reset_offsets.sh - it's also wrapper of kafka-consumer-groups and can be used only to reset offsets of consumer group,start_consumer.sh - it's wrapper of kafka-console-consumer,start_producer.sh - it's wrapper of kafka-console-producer,topics.sh - it's wrapper of kafka-topics.Kafka-client pod has other kafka tool named kcat. 
To use this tool you have to run commands on the kafka-kcat container using the wrapper script kcat.sh:Running kcat.sh on emea-nprod cluster\nkubectl exec --namespace emea-backend kafka-client-8585fbb7f9-55cjm -c kafka-kcat -- kcat.sh\nNOTE: Remember that all wrapper scripts work with admin permissions.ExamplesDescribe the current offsets of a groupDescribe group dev_grv_pforcerx on emea-nprod cluster\nkubectl exec --namespace emea-backend kafka-client-8585fbb7f9-55cjm -- consumer_groups.sh --describe --group dev_grv_pforcerx\nReset offset of a group to earliestReset offset to earliest for group group1 and topic gbl-dev-internal-gw-efk-transactions on emea-nprod cluster\nkubectl exec --namespace emea-backend kafka-client-8585fbb7f9-55cjm -- reset_offsets.sh --group group1 --to-earliest gbl-dev-internal-gw-efk-transactions\nConsume events from the beginning of a topic. It will produce output where each line has the following format: <message key>|<message body>Read topic gbl-dev-internal-gw-efk-transactions from the beginning on emea-nprod cluster\nkubectl exec --namespace emea-backend kafka-client-8585fbb7f9-55cjm -- start_consumer.sh gbl-dev-internal-gw-efk-transactions --from-beginning\nSend messages defined in a text file to kafka topics. 
Each message in the file has to have the following format: <message key>|<message body>Send all messages from the file file_with_messages.csv to topic gbl-dev-internal-gw-efk-transactions\nkubectl exec -i --namespace emea-backend kafka-client-8585fbb7f9-55cjm -- start_producer.sh gbl-dev-internal-gw-efk-transactions < file_with_messages.csv\nDelete a consumer group on a topicDelete consumer group test on topic gbl-dev-internal-gw-efk-transactions on emea-nprod cluster\nkubectl exec --namespace emea-backend kafka-client-8585fbb7f9-55cjm -- consumer_groups.sh --delete-offsets --group test gbl-dev-internal-gw-efk-transactions\nList topics and their partitions using kcatList topics info on emea-nprod cluster\nkubectl exec --namespace emea-backend kafka-client-8585fbb7f9-55cjm -c kafka-kcat -- kcat.sh -L\n"
},
{
"title": "How to Add a New Consumer Group",
"pageID": "164470080",
"pageLink": "/display/GMDM/How+to+Add+a+New+Consumer+Group",
"content": "These instructions demonstrate how to add an additional consumer group to an existing topic.Open file "topics.yml" located under mdm-reltio-handler-env\\inventory\\<environment_name>\\group_vars\\kafka and find the topic to be updated. In this example new consumer group "flex_dev_prj2" was added to topic "dev-out-full-flex-all".   2. Make sure the parameter "create_or_update" is set to True for the desired topic:   3.  Additionally, double-check that the parameter "install_only_topics" in the "all.yml" file is set to True:    4. Save the files after making the changes. Run ansible to update the configuration using the following command:  ansible-playbook install_hub_broker.yml -i inventory/<environment_name>/inventory --limit broker1 --vault-password-file=~/vault-password-file   5. Double-check ansible output to make sure changes have been implemented correctly.   6. Change the "create_or_update" parameter in "topics.yml" back to False.   7. Save the file and upload the new configuration to git. "
},
{
"title": "How to Generate JKS Keystore and Truststore",
"pageID": "164470062",
"pageLink": "/display/GMDM/How+to+Generate+JKS+Keystore+and+Truststore",
"content": "This instruction is based on the current GBL PROD Kafka keystore.jks and trustrore.jks generation. Create a certificate pair using keytool genkeypair command keytool -genkeypair -alias kafka.mdm-gateway.COMPANY.com -keyalg RSA -keysize 2048 -keystore server.keystore.jks -dname "CN=kafka.mdm-gateway.COMPANY.com, O=COMPANY, L=mdm_hub, C=US"  set the security password, set the same ●●●●●●●●●●●● the key passphraseNow create a certificate signing request ( csr ) which has to be passed on to our external / third party CA ( Certificate Authority ).keytool -certreq -alias kafka.mdm-gateway.COMPANY.com -file kafka.mdm-gateway.COMPANY.com.csr -keystore server.keystore.jks Send the csr file through the Request Manager:Log in to the BT On DemandGo to Request Manager.Click "Continue"Search for " Digital Certificates"Select the " Digital Certificates" Application and click "Continue"Click "Checkout"Select "COMPANY SSL Certificate - Internal Only" and fill:Copy CSR filefill SAN e.g from the GBL PROD Kafka: mdm-gateway.COMPANY.commdm-gateway-int.COMPANY.com●●●●●●●●●●●●●mdm-broker-p1.COMPANY.comEUW1Z1PL017.EUPWBS.COMeuw1z1pl017.COMPANY.com●●●●●●●●●●●●●mdm-broker-p2.COMPANY.comEUW1Z1PL021.EUPWBS.COMeuw1z1pl021.COMPANY.com●●●●●●●●●●●●●mdm-broker-p3.COMPANY.comEUW1Z1PL022.EUPWBS.COMeuw1z1pl022.COMPANY.comfill email addressselect "No" for additional SSL Cert request, ContinueSend the CSR reqeust.When you receive the signed certificate verify the certificateCheck the Subject: CN and O should be filled just like in the  1.a.Check the SAN: there should be the list of hosts from 3.g.ii.If the certificate is correct CONTINUE:Now we need to import these certificates into server.keystore.jks keystore. 
Import the intermediate certificate first --> then the root certificate --> and then the signed cert.keytool -importcert -alias inter -file PBACA-G2.cer -keystore server.keystore.jkskeytool -importcert -alias root -file RootCA-G2.cer -keystore server.keystore.jkskeytool -importcert -alias kafka.mdm-gateway.COMPANY.com -file kafka.mdm-gateway.COMPANY.com.cer -keystore server.keystore.jksAfter importing all three certificates you should see the "Certificate reply was installed in keystore" message.Now list the keystore and check if all the certificates were imported successfully.keytool -list -keystore server.keystore.jksYour keystore contains 3 entriesFor debugging, start with the "-v" parameterLet's create a truststore now. Set the security ●●●●●●●●●● different than the keystorekeytool -import -file PBACA-G2.cer -alias inter -keystore server.truststore.jkskeytool -import -file RootCA-G2.cer -alias root -keystore server.truststore.jksCOMPANY Certificates:PBACA-G2.cer RootCA-G2.cer"
},
{
"title": "Reset Consumergroup Offset",
"pageID": "243862614",
"pageLink": "/display/GMDM/Reset+Consumergroup+Offset",
"content": "To reset offset on Kafka topic you need to have configured the command line client. The tool that can do this action is kafka-consumer-groups.sh. You have to specify a few parameters which determine where you want to reset the offset:--topic - the topic name,--group - the consumer group name,and specify the offset value by proving one of following parameters:1. --shift-byReset offsets shifting current offset by provided number which can be negative or positive:kafka-consumer-groups.sh --bootstrap-server { server } --group { group } -command-config {  client.sasl.properties } --reset-offsets --shift-by {  number from formula } --topic {  topic } --execute2. --to-datetimeSwitch which can be used to rest offset from datetime. Date should be in format YYYY-MM-DDTHH:mm:SS.ssskafka-consumer-groups.sh --bootstrap-server { server }--group { group } -command-config {  client.sasl.properties } --reset-offsets --to-datetime 2022-02-02T00:00:00.000Z --topic {  topic } --execute3. --to-earliestSwitch which can be used to reset the offsets to the earliest (oldest) offset which is available in the topic.kafka-consumer-groups.sh --bootstrap-server { server }--group { group } -command-config {  client.sasl.properties } --reset-offsets -to-earliest --topic {  topic } --execute4. --to-latestSwitch which can be used to reset the offsets to the latest (the most recent) offset which is available in the topic.kafka-consumer-groups.sh --bootstrap-server { server }--group { group } -command-config {  client.sasl.properties } --reset-offsets -to-latest --topic {  topic } --executeExampleLet's assume that you want to have 10000 messages to read by your consumer and the topic has 10 partitions. 
The first step is moving the current offset to the latest to make sure that there are no messages to read on the topic:kafka-consumer-groups.sh --bootstrap-server { server } --group { group } --command-config {  client.sasl.properties } --reset-offsets --to-latest --topic {  topic } --executeThen calculate the offset you need to shift by to achieve the requested lag using the following formula:-1 * desired_lag / number_of_partitionsIn our example the result will be: -1 * 10000 / 10 = -1000. Use this value in the below command:kafka-consumer-groups.sh --bootstrap-server { server } --group { group } --command-config {  client.sasl.properties } --reset-offsets --shift-by -1000 --topic {  topic } --execute"
},
{
"title": "Kong gateway",
"pageID": "462065054",
"pageLink": "/display/GMDM/Kong+gateway",
"content": ""
},
{
"title": "Kong gateway migration",
"pageID": "462065057",
"pageLink": "/display/GMDM/Kong+gateway+migration",
"content": "Installation procedureDeploy crds\n# Download package with crds to current directory\ntar -xzf crds_to_deploy.tar.gzcd crds_to_deploy/\nbase=$(pwd)\nBackup olds crds\n# Switch to proper k8s context\nkubectx atp-mdmhub-nprod-apac\n\n# Get all crds from cluster and saves them into file ${crd_name}_${env}.yaml\n# Args:\n# $1 = env\ncd $base\nmkdir old_apac_nprod\ncd old_apac_nprod\nget_crds.sh apac_nprod\n\n\ncreate new crds\ncd $base/new/splitted/\n# create new crds\nfor i in $(ls); do echo $i; kubectl create -f $i ; done\n# apply new crds\nfor i in $(ls); do echo $i; kubectl apply -f $i ; done\n# replace crds that were not properly installed \nfor i in   kic-crds.yaml01 kic-crds.yaml03 kic-crds.yaml05 kic-crds.yaml07 kic-crds.yaml10 kic-crds.yaml11; do echo $i ; kubectl replace -f $i; done\nApply new version of gatewayconfigrations \ncd $base/new\nkubectl replace -f gatewayconfiguration-new.yaml\nApply old version of kongingress\ncd $base/old\nkubectl replace -f kongingresses.configuration.konghq.com.yaml\n# Performing tests is advised to check if everything is workingDeploy operators with version that have kong-gateway-operator(4.32.0 or newer)# Performing tests is advised to check if everything is workingMerge configurationhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/pull-requests/1967/overviewDeploy backend (4.33.0-project-boldmove-SNAPSHOT or newer)# Performing tests is advised to check if everything is workingDeploy mdmhub components (4.33.0-project-boldmove-SNAPSHOT or newer)# Performing tests is advised to check if everything is workingTestsChecking all ingresses\n# Change /etc/hosts if dns's are not yet changed. 
To obtain all hosts that should be modified in /etc/hosts: \n# Switch to correct k8s context\n# k get ingresses -o custom-columns=host0:.spec.rules[0].host -A | tail -n +2 | sort | uniq | tr '\\n' ' '\n# To get dataplane svc: \n# k get svc -n kong -l gateway-operator.konghq.com/dataplane-service-type=ingress\nendpoints=$(kubectl get ingress -A -o custom-columns="NAME:.metadata.name,HOST:.spec.rules[0].host,PATH:.spec.rules[0].http.paths[0].path" | tail -n +2 | awk '{print "https://"$2":443"$3}')\nwhile IFS= read -r line; do echo -e "\\n\\n---- $line ----"; curl -k $line; done <<< $endpoints\nChecking plugins \nexport apikey="xxxxxxxxx"\nexport reltio_authorization="yyyyyyyyy"\nexport consul_token="zzzzzzzzzzz"\n\n\nkey-auth:\n curl https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-emea-dev\n curl https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-emea-dev -H "apikey: $apikey"\n curl https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-emea-dev/entities/2c9cf5a5 -H "apikey: $apikey"\n\nmdm-external-oauth:\n curl --location --request POST 'https://devfederate.COMPANY.com/as/token.oauth2?grant_type=client_credentials' --header 'Content-Type: application/x-www-form-urlencoded' --header 'Origin: http://10.192.71.136:8000' --header "Authorization: Basic $reltio_authorization" | jq .access_token\n curl https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/ext-api-gw-emea-dev/entities/2c9cf5a5 --header 'Authorization: Bearer access_token_from_previous_command'\n\ncorrelation-id:\n curl -v https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-emea-dev/entities/2c9cf5a5 -H "apikey: $apikey" 2>&1 | grep hub-correlation-id \n\nbackend-auth:\n kibana-backend-auth:\n # Web browser \n    https://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/home#/\n\nsession:\n # Web browser \n   # Open debugger console in web browser and check if kong cookies are set\n\npre-function:\n k logs -n emea-backend -l app=consul -f --tail=0\n k exec -n airflow airflow-scheduler-0 -- curl -k 
http://http-mdmhub-kong-kong-proxy.kong.svc.cluster.local:80/v1/kv/dev?token=$consul_token\n\nopentelemetry:\n curl https://api-emea-nprod-gbl-mdm-hub.COMPANY.com/api-emea-dev/entities/testtest -H "apikey: $apikey"\n +\n # Web browser\n https://kibana-emea-nprod-gbl-mdm-hub.COMPANY.com/app/apm/services/kong/overview?comparisonEnabled=true&environment=ENVIRONMENT_ALL&kuery=&latencyAggregationType=avg&offset=1d&rangeFrom=now-15h&rangeTo=now&serviceGroup=&transactionType=request\n\nprometheus:\n k exec -it dataplane-kong-knkcn-bjrc7-75bb85fc4c-2msfv -- /bin/bash\n curl localhost:8100/metrics\n\n\nCheck logsGateway operatorKong operatorOld kong pod - proxy and ingress controllerNew kong dataplaneNew kong controlPlaneStatus of new kong objects: DataplaneControlplaneGateway\nk get Gateway,dataplane,controlplane -n kong\nCheck services in old and new kong Old kong\nservices=$(k exec -n kong mdmhub-kong-kong-f548788cd-27ltl -c proxy -- curl -k https://localhost:8444/services); echo $services | jq .\nNew kong\n services=$(k exec -n kong dataplane-kong-knkcn-bjrc7-5c9f596ff9-t94lf -c proxy -- curl -k https://localhost:8444/services); echo $services | jq .\nReferenceKong operator configurationhttps://github.com/Kong/kong-operator/blob/main/deploy/crds/charts_v1alpha1_kong_cr.yamlKong gateway operator crd's referencehttps://docs.konghq.com/gateway-operator/latest/reference/custom-resources/#dataplanedeploymentoptionsget_crds.shcrds_to_deploy.tar.gz"
},
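The ingress check above pipes `kubectl get ingress` custom-columns output through awk to build one test URL per ingress. The same construction can be sketched offline; the sample rows below are illustrative, real input comes from the cluster:

```python
# Build "https://<host>:443<path>" URLs from kubectl custom-columns output,
# mirroring the awk pipeline in the ingress test above.
def build_endpoints(custom_columns_output):
    urls = []
    # Skip the header row, like `tail -n +2` in the shell version.
    for row in custom_columns_output.splitlines()[1:]:
        name, host, path = row.split()
        urls.append("https://" + host + ":443" + path)
    return urls

sample = """NAME HOST PATH
api-emea api-emea-nprod-gbl-mdm-hub.COMPANY.com /api-emea-dev
kibana kibana-emea-nprod-gbl-mdm-hub.COMPANY.com /app"""

for url in build_endpoints(sample):
    print(url)
```

Each resulting URL can then be probed with `curl -k`, as in the loop above.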
{
"title": "MongoDB:",
"pageID": "164470061",
"pageLink": "/pages/viewpage.action?pageId=164470061",
"content": ""
},
{
"title": "Mongo-SOP-001: Mongo Scripts",
"pageID": "164470056",
"pageLink": "/display/GMDM/Mongo-SOP-001%3A+Mongo+Scripts",
"content": "Create Mongo Indexes\nhub_errors\n db.hub_errors.createIndex({plannedResubmissionDate: -1}, {background: true, name: "idx_plannedResubmissionDate_-1"});\n db.hub_errors.createIndex({timestamp: -1}, {background: true, name: "idx_timestamp_-1"});\n db.hub_errors.createIndex({exceptionClass: 1}, {background: true, name: "idx_exceptionClass_1"});\n db.hub_errors.createIndex({status: -1}, {background: true, name: "idx_status_-1"});\n\n\ngateway_errors\n db.gateway_errors.createIndex({plannedResubmissionDate: -1}, {background: true, name: "idx_plannedResubmissionDate_-1"});\n db.gateway_errors.createIndex({timestamp: -1}, {background: true, name: "idx_timestamp_-1"});\n db.gateway_errors.createIndex({exceptionClass: 1}, {background: true, name: "idx_exceptionClass_1"});\n db.gateway_errors.createIndex({status: -1}, {background: true, name: "idx_status_-1"});\n\n\ngateway_transactions\n db.gateway_transactions.createIndex({transactionTS: -1}, {background: true, name: "idx_transactionTS_-1"});\n db.gateway_transactions.createIndex({status: -1}, {background: true, name: "idx_status_-1"});\n db.gateway_transactions.createIndex({requestId: -1}, {background: true, name: "idx_requestId_-1"});\n db.gateway_transactions.createIndex({username: -1}, {background: true, name: "idx_username_-1"});\n\n\nentityHistory\n db.entityHistory.createIndex({country: -1}, {background: true, name: "idx_country"});\n db.entityHistory.createIndex({sources: -1}, {background: true, name: "idx_sources"});\n db.entityHistory.createIndex({entityType: -1}, {background: true, name: "idx_entityType"});\n db.entityHistory.createIndex({status: -1}, {background: true, name: "idx_status"});\n db.entityHistory.createIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});\n db.entityHistory.createIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});\n db.entityHistory.createIndex({"entity.crosswalks.value": 1}, {background: true, name: 
"idx_crosswalks_v_asc"});\n db.entityHistory.createIndex({"entity.crosswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"});\n db.entityHistory.createIndex({forceModificationDate: -1}, {background: true, name: "idx_forceModificationDate"});\n\n\nentityRelations\n db.entityRelations.createIndex({country: -1}, {background: true, name: "idx_country"});\n db.entityRelations.createIndex({sources: -1}, {background: true, name: "idx_sources"});\n db.entityRelations.createIndex({relationType: -1}, {background: true, name: "idx_relationType"});\n db.entityRelations.createIndex({status: -1}, {background: true, name: "idx_status"});\n db.entityRelations.createIndex({creationDate: -1}, {background: true, name: "idx_creationDate"});\n db.entityRelations.createIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"});\n db.entityRelations.createIndex({startObjectId: -1}, {background: true, name: "idx_startObjectId"});\n db.entityRelations.createIndex({endObjectId: -1}, {background: true, name: "idx_endObjectId"});\n db.entityRelations.createIndex({"relation.crosswalks.value": 1}, {background: true, name: "idx_crosswalks_v_asc"}); \n db.entityRelations.createIndex({"relation.crosswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"}); \n db.entityRelations.createIndex({forceModificationDate: -1}, {background: true, name: "idx_forceModificationDate"});\n\n\n\n\n\n\nFind ACTIVE relations connected to inactive Entities\nvar start = new Date().getTime();\n\nvar result = db.getCollection("entityRelations").aggregate(\n\n\t// Pipeline\n\t[\n\t\t// Stage 1\n\t\t{\n\t\t\t$match: { \n\t\t\t "status" : "ACTIVE"\n\t\t\t}\n\t\t},\n\n//\t\t// Stage 2\n//\t\t{\n//\t\t\t$limit: 1000\n//\t\t},\n\n\t\t// Stage 3\n\t\t{\n\t\t\t$lookup: // Equality Match\n\t\t\t{\n\t\t\t from: "entityHistory",\n\t\t\t localField: "relation.endObject.objectURI",\n\t\t\t foreignField: "_id",\n\t\t\t as: "matched_entity"\n\t\t\t}\n\t\t},\n\n\t\t// Stage 
4\n\t\t{\n\t\t\t$match: {\n\t\t\t "$or" : [\n\t\t\t {\n\t\t\t "matched_entity.status" : "INACTIVE"\n\t\t\t }, \n\t\t\t {\n\t\t\t "matched_entity.status" : "LOST_MERGE"\n\t\t\t },\n\t\t\t {\n\t\t\t "matched_entity.status" : "DELETED"\n\t\t\t } \n\t\t\t ]\n\t\t\t}\n\t\t},\n\n\t\t// Stage 5\n\t\t{\n\t\t\t$group: {\n\t\t\t\t\t\t _id:"$matched_entity.status", \n\t\t\t\t\t\t count:{$sum:1}, \n\t\t\t}\n\t\t},\n\n\t]\n\n\t// Created with Studio 3T, the IDE for MongoDB - https://studio3t.com/\n\n);\n\n\n \t\nprintjson(result._batch) \t\n\nvar end = new Date().getTime();\nvar duration = end - start;\nprint("duration: " + duration + " ms")\nprint("END")\nFix LOST_MERGE entities with wrong parentEntityId\nprint("START")\nvar start = new Date().getTime();\n\nvar result = db.getCollection("entityHistory").aggregate(\n // Pipeline\n [\n // Stage 1\n {\n $match: {\n "status" : "LOST_MERGE",\n "$and" : [\n {\n "$or" : [\n {\n "mdmSource" : "RELTIO"\n },\n {\n "mdmSource" : {\n "$exists" : false\n }\n }\n ]\n }\n ]\n }\n },\n\n // Stage 2\n {\n $graphLookup: {\n "from" : "entityHistory",\n "startWith" : "$_id",\n "connectFromField" : "parentEntityId",\n "connectToField" : "_id",\n "as" : "master",\n "maxDepth" : 10.0,\n "depthField" : "depthField"\n }\n },\n\n // Stage 3\n {\n $unwind: {\n "path" : "$master",\n "includeArrayIndex" : "arrayIndex",\n "preserveNullAndEmptyArrays" : false\n }\n },\n\n // Stage 4\n {\n $match: {\n "master.status" : {\n "$ne" : "LOST_MERGE"\n }\n }\n },\n\n // Stage 5\n {\n $redact: {\n "$cond" : {\n "if" : {\n "$ne" : [\n "$master._id",\n "$parentEntityId"\n ]\n },\n "then" : "$$KEEP",\n "else" : "$$PRUNE"\n }\n }\n },\n\n ]\n\n // Created with Studio 3T, the IDE for MongoDB - https://studio3t.com/\n\n);\n\n\nresult.forEach(function(obj) {\n var id = obj._id;\n var masterId = obj.master._id;\n\n if( masterId !== undefined){\n\n print( id + " " + " " + obj.parentEntityId +" replaced to "+ masterId);\n var currentTime = new Date().getTime();\n\n var result 
= db.entityHistory.update( {"_id":id}, {$set: { "parentEntityId":masterId, "forceModificationDate": NumberLong(currentTime) } });\n printjson(result);\n }\n\n});\n\n\nvar end = new Date().getTime();\nvar duration = end - start;\nprint("duration: " + duration + " ms")\nprint("END")\n\n\n\nFind entities based on the FILE with the crosswalks\ndb = db.getSiblingDB('reltio')\nvar file = cat('crosswalks.txt'); // read the crosswalks file\nvar crosswalk_ids = file.split('\\n'); // create an array of crosswalks\nfor (var i = 0, l = crosswalk_ids.length; i < l; i++){ // for every crosswalk search it in the entityHistory\n print("ID crosswalk: " + crosswalk_ids[i])\n var result = db.entityHistory.find({\n status: { $eq: "ACTIVE" },\n "entity.crosswalks.value": crosswalk_ids[i]\n }).projection({id:1, country:1})\n printjson(result.toArray());\n}\nFind ACTIVE entities with duplicated crosswalk - missing or wrong LOST_MERGE event\ndb.getCollection("entityHistory").aggregate(\n\n\t// Pipeline\n\t[\n\t\t// Stage 1\n\t\t{\n\t\t\t$match: { status: { $eq: "ACTIVE" }, entityType:"configuration/entityTypes/HCP" , mdmSource: "RELTIO", "lastModificationDate" : {\n\t\t\t "$gte" : NumberLong(1529966574477)\n\t\t\t } }\n\t\t},\n\n\t\t// Stage 2\n\t\t{\n\t\t\t$project: { _id: 0, "entity.crosswalks": 1,"entity.uri":2, "entity.updatedTime":3 }\n\t\t},\n\n\t\t// Stage 3\n\t\t{\n\t\t\t$unwind: "$entity.crosswalks"\n\t\t},\n\n\t\t// Stage 4\n\t\t{\n\t\t\t$group: {_id:"$entity.crosswalks.value", count:{$sum:1}, entities:{$push: {uri:"$entity.uri", modificationTime:"$entity.updatedTime"}}}\n\t\t},\n\n\t\t// Stage 5\n\t\t{\n\t\t\t$match: { count: { $gte: 2 } }\n\t\t},\n\n\t\t// Stage 6\n\t\t{\n\t\t\t$redact: {\n\t\t\t "$cond" : {\n\t\t\t "if" : {\n\t\t\t "$ne" : [\n\t\t\t "$entity.crosswalks.0.value", \n\t\t\t "$entity.crosswalks.1.value"\n\t\t\t ]\n\t\t\t }, \n\t\t\t "then" : "$$KEEP", \n\t\t\t "else" : "$$PRUNE"\n\t\t\t }\n\t\t\t}\n\t\t},\n\t],\n\n\t// Options\n\t{\n\t\tallowDiskUse: 
true\n\t}\n\n\t// Created with Studio 3T, the IDE for MongoDB - https://studio3t.com/\n\n);\n\n\n\nFix LOST_MERGE entities with missing entityType attribute\nprint("START")\nvar start = new Date().getTime();\n\nvar result = db.getCollection("entityHistory").aggregate(\n\t// Pipeline\n\t[\n\t\t// Stage 1\n\t\t{\n\t\t\t$match: {\n\t\t\t "status" : "LOST_MERGE", \n\t\t\t "entityType" : {\n\t\t\t "$exists" : false\n\t\t\t }, \n\t\t\t "$and" : [\n\t\t\t {\n\t\t\t "$or" : [\n\t\t\t {\n\t\t\t "mdmSource" : "RELTIO"\n\t\t\t }, \n\t\t\t {\n\t\t\t "mdmSource" : {\n\t\t\t "$exists" : false\n\t\t\t }\n\t\t\t }\n\t\t\t ]\n\t\t\t }\n\t\t\t ]\n\t\t\t}\n\t\t},\n\n\t\t// Stage 2\n\t\t{\n\t\t\t$graphLookup: {\n\t\t\t "from" : "entityHistory", \n\t\t\t "startWith" : "$_id", \n\t\t\t "connectFromField" : "parentEntityId", \n\t\t\t "connectToField" : "_id", \n\t\t\t "as" : "master", \n\t\t\t "maxDepth" : 10.0, \n\t\t\t "depthField" : "depthField"\n\t\t\t}\n\t\t},\n\n\t\t// Stage 3\n\t\t{\n\t\t\t$unwind: {\n\t\t\t "path" : "$master", \n\t\t\t "includeArrayIndex" : "arrayIndex", \n\t\t\t "preserveNullAndEmptyArrays" : false\n\t\t\t}\n\t\t},\n\n\t\t// Stage 4\n\t\t{\n\t\t\t$match: {\n\t\t\t "master.status" : {\n\t\t\t "$ne" : "LOST_MERGE"\n\t\t\t }\n\t\t\t}\n\t\t},\n\n\t\t// Stage 5\n\t\t{\n\t\t\t$redact: {\n\t\t\t "$cond" : {\n\t\t\t "if" : {\n\t\t\t "$eq" : [\n\t\t\t "$master._id", \n\t\t\t "$parentEntityId"\n\t\t\t ]\n\t\t\t }, \n\t\t\t "then" : "$$KEEP", \n\t\t\t "else" : "$$PRUNE"\n\t\t\t }\n\t\t\t}\n\t\t}\n\t]\n\n\t// Created with Studio 3T, the IDE for MongoDB - https://studio3t.com/\n);\n\n\t\nresult.forEach(function(obj) {\n var id = obj._id;\n\n var masterEntityType = obj.master.entityType;\n\t\n\tif( masterEntityType !== undefined){\n if(obj.entityType == undefined){\n\t print("entityType is " + obj.entityType + " for " + id +", changing to "+ masterEntityType);\n\t var currentTime = new Date().getTime();\n\t\n var result = db.entityHistory.update( {"_id":id}, {$set: { 
"entityType":masterEntityType, "lastModificationDate": NumberLong(currentTime) } });\n printjson(result);\n }\n\t}\n\n});\n \t\n \t\nvar end = new Date().getTime();\nvar duration = end - start;\nprint("duration: " + duration + " ms")\nprint("END")\nGenerate report from gateway_transaction (US)\ndb.getCollection("gateway_transactions").aggregate(\n\n\t// Pipeline\n\t[\n\t\t// Stage 1\n\t\t{\n\t\t\t$match: { \n\t\t\t "$and" : [\n\t\t\t {\n\t\t\t "transactionTS" : {\n\t\t\t "$gte" : NumberLong(1551974500000)\n\t\t\t }, \n\t\t\t "username" : "dea_batch"\n\t\t\t }\n\t\t\t ]\n\t\t\t}\n\t\t},\n\n\t\t// Stage 2\n\t\t{\n\t\t\t$group: {\n\t\t\t _id:"$requestId", \n\t\t\t count: { $sum:1 },\n\t\t\t transactions: { $push : "$$ROOT" }\n\t\t\t}\n\t\t},\n\n\t\t// Stage 3\n\t\t{\n\t\t\t$unwind: {\n\t\t\t path : "$transactions",\n\t\t\t}\n\t\t},\n\n\t\t// Stage 4\n\t\t{\n\t\t\t$addFields: {\n\t\t\t \n\t\t\t "statusNumber": { \n\t\t\t $cond: { \n\t\t\t if: { \n\t\t\t $eq: ["$transactions.status", "failed"] \n\t\t\t }, \n\t\t\t then: 0, \n\t\t\t else: 1 \n\t\t\t }\n\t\t\t } \n\t\t\t \n\t\t\t \n\t\t\t}\n\t\t},\n\n\t\t// Stage 5\n\t\t{\n\t\t\t$sort: {\n\t\t\t "transactions.requestId": 1, \n\t\t\t "statusNumber": -1,\n\t\t\t "transactions.transactionTS": -1 \n\t\t\t}\n\t\t},\n\n\t\t// Stage 6\n\t\t{\n\t\t\t$group: {\n\t\t\t _id:"$_id", \n\t\t\t transaction: { "$first": "$$CURRENT" }\n\t\t\t}\n\t\t},\n\n\t\t// Stage 7\n\t\t{\n\t\t\t$addFields: {\n\t\t\t "transaction.transactions.count": "$transaction.count" \n\t\t\t}\n\t\t},\n\n\t\t// Stage 8\n\t\t{\n\t\t\t$replaceRoot: {\n\t\t\t newRoot: "$transaction.transactions"\n\t\t\t}\n\t\t},\n\n\t\t// Stage 9\n\t\t{\n\t\t\t$addFields: {\n\t\t\t "file_raw_line": "$metadata.file_raw_line",\n\t\t\t "filename": "$metadata.filename"\n\t\t\t}\n\t\t},\n\n\t\t// Stage 10\n\t\t{\n\t\t\t$project: {\n\t\t\t requestId : 1,\n\t\t\t count: 2,\n\t\t\t "filename": 3,\n\t\t\t uri: "$mdmUri",\n\t\t\t country: 5,\n\t\t\t source: 6,\n\t\t\t crosswalkId: 7,\n\t\t\t 
status: 8,\n\t\t\t timestamp: "$transactionTS",\n\t\t\t //"file_raw_line": 10,\n\t\t\t\n\t\t\t}\n\t\t},\n\t],\n\n\t// Options\n\t{\n\t\tallowDiskUse: true\n\t}\n\n\t// Created with Studio 3T, the IDE for MongoDB - https://studio3t.com/\n\n);\n\n\n\nExport Config for Studio3T - format:<ExportSettings> <VERSION>1</VERSION> <exportSource>CURRENT_QUERY_RESULT</exportSource> <skipValue>0</skipValue> <limitValue>0</limitValue> <exportFormat>CSV</exportFormat> <exportOptions> <VERSION>2</VERSION> <emptyFieldImportStrategy>MAKE_NULL</emptyFieldImportStrategy> <delimiter> </delimiter> <encapsulator>&quot;</encapsulator> <isEscapeControlChars>false</isEscapeControlChars> <exportNullFieldsAsEmptyStrings>true</exportNullFieldsAsEmptyStrings> <isAddColHeaders>true</isAddColHeaders> <selectedFields> <string>_id</string> <string>count</string> <string>country</string> <string>crosswalkId</string> <string>filename</string> <string>requestId</string> <string>source</string> <string>status</string> <string>timestamp</string> <string>uri</string> </selectedFields> <noArrays>false</noArrays> <noNestedFields>false</noNestedFields> <noHeader>false</noHeader> <skipLines>0</skipLines> <parseError>false</parseError> <trimLeadingSpaces>false</trimLeadingSpaces> <trimTrailingSpaces>false</trimTrailingSpaces> <isUnixLF>false</isUnixLF> <csvPreset>Excel</csvPreset> </exportOptions> <selectedFields> <string>_id</string> <string>count</string> <string>country</string> <string>crosswalkId</string> <string>filename</string> <string>requestId</string> <string>source</string> <string>status</string> <string>timestamp</string> <string>uri</string> </selectedFields> <exportTargetType>FILE</exportTargetType> <exportPath>D:\\docs\\FLEX\\REPORT_transaction_log\\10_10_2018\\load_report.csv</exportPath> <noCursorTimeout>true</noCursorTimeout></ExportSettings>Find entities and GROUP BY country\n db.entityHistory.aggregate([\n {$match: { status: { $eq: "ACTIVE" }, entityType:"configuration/entityTypes/HCP" } 
},\n {$project: { _id: 1, "country":1 } },\n {$group : {_id:"$country", count:{$sum:1},}},\n {$match: { count: { $gte: 2 } } },\n],{ allowDiskUse: true } )\nFind Entities where ALL/ANY of the crosswalks array objects has delete date set\n//https://stackoverflow.com/questions/43778747/check-if-a-field-exists-in-all-the-elements-of-an-array-in-mongodb-and-return-th?rq=1\n\n// find entities where ALL crosswalk array objects has delete date set (not + exists false)\ndb.entityHistory.find({\n entityType: "configuration/entityTypes/HCP",\n country: "br",\n status: "ACTIVE",\n "entity.crosswalks": { $not: { $elemMatch: { deleteDate: {$exists:false} } } }\n})\n\n// find entities where ANY OF crosswalk array objecst has delete date set\ndb.entityHistory.find({\n entityType: "configuration/entityTypes/HCP",\n country: "br",\n status: "ACTIVE",\n "entity.crosswalks": { $elemMatch: { deleteDate: {$exists:true} } }\n})\nExample of Multiple Update based on the search query\ndb.getCollection("entityHistory").update(\n { \n "status" : "LOST_MERGE", \n "entity" : {\n "$exists" : true\n }\n },\n { \n $set: { "lastModificationDate": NumberLong(1551433013000) }, \n $unset: {entity:""}\n },\n { multi: true }\n)\n\n\n\nGroup RDM exceptions and get details with sample entities ids\n// Stages that have been excluded from the aggregation pipeline query\n__3tsoftwarelabs_disabled_aggregation_stages = [\n\n\t{\n\t\t// Stage 2 - excluded\n\t\tstage: 2, source: {\n\t\t\t$limit: 1000\n\t\t}\n\t},\n]\n\ndb.getCollection("hub_errors").aggregate(\n\n\t// Pipeline\n\t[\n\t\t// Stage 1\n\t\t{\n\t\t\t$match: {\n\t\t\t "exceptionClass" : "com.COMPANY.publishinghub.processing.RDMMissingEventForwardedException",\n\t\t\t "status" : "NEW"\n\t\t\t}\n\t\t},\n\n\t\t// Stage 3\n\t\t{\n\t\t\t$project: { \n\t\t\t "entityId":"$exchangeInHeaders.kafka[dot]KEY",\n\t\t\t "attributeName": "$exceptionDetails.attributeName",\n\t\t\t "attributeValue": "$exceptionDetails.attributeValue", \n\t\t\t "errorCode": 
"$exceptionDetails.errorCode"\n\t\t\t}\n\t\t},\n\n\t\t// Stage 4\n\t\t{\n\t\t\t$group: {\n\t\t\t _id: { entityId:"$entityId", attributeValue: "$attributeValue",attributeName:"$attributeName"}, // can be grouped on multiple properties \n\t\t\t dups: { "$addToSet": "$_id" }, \n\t\t\t count: { "$sum": 1 } \n\t\t\t}\n\t\t},\n\n\t\t// Stage 5\n\t\t{\n\t\t\t$group: {\n\t\t\t //_id: { attributeValue: "$_id.attributeValue",attributeName:"$_id.attributeName"}, // can be grouped on multiple properties \n\t\t\t _id: { attributeName:"$_id.attributeName"}, // can be grouped on multiple properties \n\t\t\t entities: { "$addToSet": "$_id.entityId" }\n\t\t\t}\n\t\t},\n\n\t\t// Stage 6\n\t\t{\n\t\t\t$project: {\n\t\t\t _id: 1,\n\t\t\t sample_entities: { $slice: [ "$entities", 10 ] }, \n\t\t\t affected_entities_count: { $size: "$entities" } \n\t\t\t}\n\t\t},\n\t],\n\n\t// Options\n\t{\n\t\tallowDiskUse: true\n\t}\n\n\t// Created with Studio 3T, the IDE for MongoDB - https://studio3t.com/\n\n);\n\n\n\nMongo SIMPLE searches/filter/lengths/regexp examples\n// GET\ndb.entityHistory.find({})\n// GET random 20 entities\ndb.entityHistory.aggregate( \n [ \n { $match : { status : "ACTIVE" } },\n { \n $sample: {size: 20} \n }, \n {\n $project: {_id:1}\n },\n\n] )\n \n// entity get by ID\ndb.entityHistory.find({\n"_id":"entities/rOATtJD"\n})\n\n\ndb.entityHistory_PforceRx.find({\n _id: "entities/Tq4c32l"\n})\n\n// Specialities exists\ndb.entityHistory.find({\n "entity.attributes.Specialities": {\n $exists: true\n }\n}).limit(20)\n\n// Specialities size > 4\ndb.entityHistory.find({\n "entity.attributes.Specialities": {\n $exists: true\n },\n $and: [\n {$where: "this.entity.attributes.Specialities.length > 6"}, \n {$where: "this.sources.length >= 2"},\n ]\n\n})\n.limit(10)\n// only project ID\n.projection({id:1})\n\n\n// Address size > 4\ndb.entityHistory.find({\n "entity.attributes.Address": {\n $exists: true\n },\n $and: [\n {$where: "this.entity.attributes.Address.length > 4"}, \n {$where: 
"this.sources.length > 2"},\n ]\n\n})\n.limit(10)\n// only project ID\n//.projection({id:1})\n\n\n// Address AddressType size 2\ndb.entityHistory.find({\n "entity.attributes.Address": {\n $exists: true\n },\n "entity.attributes.Address.value.Status.lookupCode": {\n $exists: true,\n $eq: "ACTV"\n },\n }, {\n "entity.attributes.Address.value.Status": 1\n })\n .limit(10)\n\n\n// Address AddressType size 2\ndb.entityHistory.find({\n "entity.attributes.Address": {\n $exists: true\n },\n $and: [\n {$where: "this.entity.attributes.Address.length >= 4"}, \n {$where: "this.sources.length >= 4"},\n ]\n\n})\n.limit(2)\n//.projection({id:1})\n// only project ID\n\n\ndb.entityHistory.find({\n "entity.attributes.Address": {\n $exists: true\n },\n "entity.attributes.Address.value.BestRecord": {\n $exists: true\n }\n})\n.limit(2)\n// only project ID\n//.projection({id:1})\n\ndb.entityHistory.find({\n "entity.attributes.Address": {\n $exists: true\n },\n "entity.attributes.Address.value.ValidationStatus": {\n $exists: true\n },\n "entityType":"configuration/entityTypes/HCO",\n $and: [{\n $where: "this.entity.attributes.Address.length > 4"\n \n }]\n })\n .limit(1)\n// only project ID\n//.projection({id:1})\n\n\n\n//SOURCE NAME\ndb.entityHistory.find({\n "entity.attributes.Address": {\n $exists: true\n },\n lastModificationDate: {\n $gt: 1534850405000\n }\n })\n .limit(10)\n// only project\n\n\n\ndb.entityHistory.find({\n "entity.attributes.Address": {\n $exists: true\n },\n "entity.attributes.Address.refRelation.objectURI": {\n $exists: false\n },\n }).limit(10)\n// only project\n\n\n// Phone exists\ndb.entityHistory.find({\n "entity.attributes.Phone": {\n $exists: true\n }\n}) .limit(1)\n\n//Specialities exists\ndb.entityHistory.find({\n "entity.attributes.Specialities": {\n $exists: true\n },\n country: "mx"\n}).limit(10)\n \n// Speclaity Code\ndb.entityHistory.find({\n "entity.attributes.Specialities": {\n $exists: true\n },\n 
"entity.attributes.Specialities.value.Specialty.lookupCode": "WMX.TE",\n country: "mx"\n}).limit(1)\n \n// entity.attributes. Identifiers License exists\ndb.entityHistory.find({\n "entity.attributes.Identifiers": {\n $exists: true\n },\n country: "mx"\n}).limit(1)\n \n \n// Name of organization is empty\ndb.entityHistory.find({\n entityType: "configuration/entityTypes/HCO",\n "entity.attributes.Name": {\n $exists: false\n },\n // "parentEntityId": {\n // $exists: false\n // },\n country: "mx"\n}).limit(10)\n\n\n\n\n// RELACJE\n// GET\ndb.entityRelations.find({})\n\n// entity get by ID startObjectID\ndb.entityRelations.find({\n startObjectId: "entities/14tDdkhy"\n})\n\ndb.entityRelations.find({\n endObjectId: "entities/14tDdkhy"\n})\n\n\ndb.entityRelations.find({\n _id: "relations/RJx9ZkM"\n})\n\ndb.entityRelations.find({\n "relation.attributes.ActPhone": {\n $exists: true\n }\n}).limit(1)\n\n\n\n// Address size > 4\ndb.entityRelations.find({\n "relation.attributes.Phone": {\n $exists: true\n },\n "relationType":"configuration/relationTypes/HasAddress",\n //$and: [\n// {$where: "this.relation.attributes.Address.length > 3"}, \n //{$where: "this.sources.length >= 2"},\n //]\n\n})\n.limit(10)\n// only project ID\n//.projection({id:1})\n\n\n\n\n// \ndb.entityRelations.find({\n "relation.crosswalks": {\n $exists: true\n },\n "relation.crosswalks.deleteDate": {\n $exists: true\n }\n\n})\n.limit(10)\n// only project ID\n//.projection({id:1})\n\n\ndb.entityRelations.find({\n "relation.startObject": {\n $exists: true\n },\n "relation.startObject.objectURI": {\n $exists: false\n }\n\n})\n.limit(1)\n\n\n\n// merge finder\ndb.entityRelations.find({\n "relation.startObject": {\n $exists: true\n },\n "relation.endObject": {\n $exists: true\n },\n $and: [\n {$where: "this.relation.startObject.crosswalks.length > 2"}, \n {$where: "this.sources.length >= 1"},\n ]\n\n})\n.limit(10)\n// only project ID\n//.projection({id:1})\n\n\n// merge finder\ndb.entityRelations.find({\n 
"relation.startObject": {\n $exists: true\n },\n "relation.endObject": {\n $exists: true\n },\n //"relation.startObject.crosswalks.0.uri": mb.regex.startsWith("relation.startObject.objectURI")\n "relation.startObject.crosswalks.0.uri": /^relation.startObject.objectURI.*$/i\n})\n.limit(2)\n\n\n\n\n\n// Phone - HasAddress\ndb.entityRelations.find({\n "relation.attributes.Phone": {\n $exists: true\n },\n "relationType":"configuration/relationTypes/HasAddress",\n})\n.limit(10)\n\n// ActPhone - Activity\ndb.entityRelations.find({\n "relation.attributes.ActPhone": {\n $exists: true\n },\n "relationType":"configuration/relationTypes/Activity",\n})\n\n\n// Identifiers - HasAddress\ndb.entityRelations.find({\n "relation.attributes.Identifiers": {\n $exists: true\n },\n "relationType":"configuration/relationTypes/HasAddress",\n})\n.limit(10)\n\n\n// Identifiers - Activity\ndb.entityRelations.find({\n "relation.attributes.ActIdentifiers": {\n $exists: true\n },\n "relationType":"configuration/relationTypes/Activity",\n})\n\n\n\n\ndb.entityHistory.find({\n "entity.attributes.Address": {\n $exists: true\n }\n })\n// only project\n\n\ndb.entityHistory.find({\n "entity.attributes.Address": {\n $exists: true\n },\n "entity.attributes.Address.refRelation.uri": {\n $exists: false\n },\n "entity.attributes.Address.refRelation.objectURI": {\n $exists: true\n },\n })\n// only project\n\n\ndb.entityHistory.find({\n "entity.attributes.Address": {\n $exists: true\n },\n "entity.attributes.Address.refRelation.uri": {\n $exists: true\n },\n "entity.attributes.Address.refRelation.objectURI": {\n $exists: false\n }\n })\n// only project\n\ndb.entityHistory.find({\n "entity.attributes.Address": {\n $exists: true\n },\n "entity.attributes.Address.refRelation.uri": {\n $exists: true\n },\n "entity.attributes.Address.refRelation.objectURI": {\n $exists: true\n },\n })\n\ndb.entityHistory.find({\n "entity.attributes.Address": {\n $exists: true\n },\n lastModificationDate: {\n $gt: 1534850405000\n 
}\n })\n .limit(10)\n// only project\n\ndb.entityHistory.find({})\n// GET random 20 entities\n\n \n// entity get by ID\ndb.entityHistory.find({\n _id: "entities/Nzn07bq"\n})\n\n\n// Address AddressType size 2\ndb.entityHistory.find({\n "entity.attributes.Address": {\n $exists: true\n },\n $and: [\n {$where: "this.entity.attributes.Address.length >= 4"}, \n {$where: "this.sources.length >= 4"},\n ]\n\n})\n.limit(2)\n\n\n\n\nGet the EntityId and the Crosswalks Size - ifNull return 0 elements\ndb.getCollection("entityHistory").aggregate(\n\n\t// Pipeline\n\t[\n\t\t// Stage 1\n\t\t{\n\t\t\t$match: { \t\n\t\t\t mdmSource: "RELTIO" \n\t\t\t}\n\t\t},\n\n\t\t// Stage 2\n\t\t{\n\t\t\t$limit: 1000\n\t\t},\n\n\t\t// Stage 3\n\t\t{\n\t\t\t$addFields: {\n\t\t\t "crosswalksSize": { $size: { "$ifNull": [ "$entity.crosswalks", [] ] } }\n\t\t\t}\n\t\t},\n\n\t\t// Stage 4\n\t\t{\n\t\t\t$project: {\n\t\t\t _id: 1,\n\t\t\t crosswalksSize:1 \n\t\t\t \n\t\t\t}\n\t\t},\n\n\t]\n\n\t// Created with Studio 3T, the IDE for MongoDB - https://studio3t.com/\n\n);\n\n\nTMP Copy\n// COPY THIS SECTION \n"
},
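The duplicated-crosswalk aggregation above (unwind the crosswalks array, group by crosswalk value, keep groups with a count of 2 or more) can be prototyped outside Mongo before running it on a large collection. This is a plain-Python sketch of the same grouping over toy documents; the URIs and crosswalk values are made up for illustration:

```python
from collections import defaultdict

# Unwind each entity's crosswalks and group entity URIs by crosswalk value;
# any value seen on two or more entities indicates a duplicate, i.e. a
# missing or wrong LOST_MERGE event, as in the aggregation above.
def find_duplicate_crosswalks(entities):
    groups = defaultdict(list)
    for doc in entities:
        for crosswalk in doc.get("crosswalks", []):
            groups[crosswalk["value"]].append(doc["uri"])
    return {value: uris for value, uris in groups.items() if len(uris) >= 2}

docs = [
    {"uri": "entities/A", "crosswalks": [{"value": "X1"}, {"value": "X2"}]},
    {"uri": "entities/B", "crosswalks": [{"value": "X1"}]},
    {"uri": "entities/C", "crosswalks": [{"value": "X3"}]},
]
print(find_duplicate_crosswalks(docs))  # {'X1': ['entities/A', 'entities/B']}
```

The Mongo pipeline performs the same steps server-side with $unwind, $group on "$entity.crosswalks.value", and a $match on count.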
{
"title": "Mongo-SOP-002: Running mongo scripts remotely on k8s cluster",
"pageID": "284809016",
"pageLink": "/display/GMDM/Mongo-SOP-002%3A+Running+mongo+scripts+remotely+on+k8s+cluster",
"content": "Get the tool:Go to file http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/browse/helm/mongo/src/scripts/run_mongo_remote/run_mongo_remote.sh?at=refs%2Fheads%2Fproject%2Fboldmove in the inbound-services repository.Download the file to your computer.The tool requires kubernetes installed and WSL (tested on WSL2) to work correctly.Usage guide:Available commands:./run_mongo_remote.sh --helpShows the general help message for the script tool:./run_mongo_remote.sh exec <ARGS>Execute to run a script remotely on the agent pod on the k8s cluster. The script will be copied from the given path on the local machine to the pod and then run there. To get details about accepted arguments run ./run_mongo_remote.sh exec --help./run_mongo_remote.sh get <ARGS>Execute to download script results from the agent pod and save them in the given path on your local machine. To get details about accepted arguments run ./run_mongo_remote.sh get --helpExample flow:Save the mongo script you want to run in a file example_script.js (the script file has to have a .js or .mongo extension for the tool to run correctly)Run ./run_mongo_remote.sh exec example_script.js emea_dev to run your script on the emea_dev environmentUpon completion, the path where the script results were saved on the agent pod will be returned (e.g. /pod/path/result.txt)Run ./run_mongo_remote.sh get /pod/path/result.txt local/machine/path/example_script_result.txt emea_dev to save the script results on your local machine.Editing the toolThe tool was written using bashly - a bash framework for developing CLI applications.The tool source is available HERE. Edit the files and generate a single output script based on the guides available on the bashly site.DO NOT EDIT the run_mongo_remote.sh file MANUALLY (it may result in the script not working correctly)."
},
{
"title": "Notifications:",
"pageID": "430347505",
"pageLink": "/pages/viewpage.action?pageId=430347505",
"content": ""
},
{
"title": "Sending notification",
"pageID": "430347508",
"pageLink": "/display/GMDM/Sending+notification",
"content": "We send notifications to our clients in the case of the following events:Unplanned outage - MDMHUB is not available for our clients - the REST API, Kafka or Snowflake does not work properly and clients are not able to connect. Currently, you have to send a notification in the case of the following events:kong_http_500_status_prodkong_http_502_status_prodkong_http_503_status_prodkong3_http_500_status_prodkong3_http_502_status_prodkong3_http_503_status_prodkafka_missing_all_brokers_prodPlanned outage - a maintenance window during which we have to perform maintenance tasks that will cause temporary problems with access to MDMHUB endpoints,Update configuration - some of the MDMHUB endpoints are changed, e.g. the REST API URL, the Kafka address, etc.We always send a notification in the case of an unplanned outage to inform our clients and let them know that somebody from our team is working on the issue. Planned outage and update configuration are always planned activities that are confirmed with release management and scheduled for a specific time range.Notification LayoutYou send notifications using your COMPANY email account.As CC, always set our DLs: DL-ATP_MDMHUB_SUPPORT_PROD@COMPANY.com, DL-ATP_MDMHUB_SUPPORT@COMPANY.comAdd our clients as BCC according to the table below:Recipients list (the XLS above is easier to filter){"name":"MDM_Hub_notification_recipients.xlsx","type":"xlsx","pageID":"430347508"}On the above screen we can see a few placeholders:Notification type - must be one of: UNPLANNED OUTAGE, PLANNED OUTAGE or UPDATE CONFIGURATION,Environments - a list of the MDMHUB environments that the notification relates to. It is very important to provide the region and specific environment type, e.g. AMER DEV/QA/STAGE, AMER NPRODs, etc. It is good to provide links to the documentation that describes the listed environments. Environment documentation can be found here,When - the date when the situation that the notification describes started occurring. 
In the case of unplanned outage you have to provide the date when we noticed the failure. For rest of situations it should be time range to determine when activity will start and finish,Description - details that describe situation, possible impacts and expected time of resolution (if it is possible to determine). Some of the notification templates have placeholder "<List of endpoints>" that should be fill up using labels endpoint and endpoint_ext value from alert triggered in karma. Thanks this, customers will be able to recognize that outage impacting on theirs business.Notification templatesBelow you can find notification templates that you can get, fill and send to our clients:Generic template: notification.msgKafka issues: kafka.msgAPI issues: api.msg"
},
{
"title": "COMPANYGlobalCustomerID:",
"pageID": "302706348",
"pageLink": "/pages/viewpage.action?pageId=302706348",
"content": ""
},
{
"title": "Fix \"\" or null IDs - Fix Duplicates",
"pageID": "250675882",
"pageLink": "/pages/viewpage.action?pageId=250675882",
"content": "The following SOP describes how to fix "" or null COMPANYGlobalCustomerIDs values in Mongo and regenerate events in Snowflake.The SOP also contains the step to fix duplicated values and regenerate events.Steps: Check empty or null: \n\t db = db.getSiblingDB("reltio_amer-prod");\n\t\tdb.getCollection("entityHistory").find(\n\t\t\t{\n\t\t\t\t"$or" : [\n\t\t\t\t\t{\n\t\t\t\t\t\t"COMPANYGlobalCustomerID" : ""\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t"COMPANYGlobalCustomerID" : {\n\t\t\t\t\t\t\t"$exists" : false\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t],\n\t\t\t\t"status" : {\n\t\t\t\t\t"$ne" : "DELETED"\n\t\t\t\t}\n\t\t\t}\n\t\t);\nMark all ids for further event regeneration. Run the Scritp on Studio3t or K8s mongoScript - http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/docker/mongo_utils/scripts/COMPANYglobalcustomerids_fix_empty_null_script.jsRun on K8s:log in to correct cluster on backend namespace copy script - kubectl cp  ./reload_entities_fix_COMPANY_id_DEV.js mongo-0:/tmp/reload_entities_fix_COMPANY_id_DEV.jsrun - nohup mongo --host mongo/localhost:27017 -u admin -p <pass> --authenticationDatabase admin reload_entities_fix_COMPANY_id_DEV.js > out/reload_DEV.out 2>&1 &download result - kubectl cp mongo-0:/tmp/out/reload_DEV.out ./reload_DEV.outUsing output find all "TODO" lines and regenerate correct eventsCheck duplicates:\n\t\t\t\t// Pipeline\n\t\t\t[\n\t\t\t\t// Stage 1\n\t\t\t\t{\n\t\t\t\t\t$group: {\n\t\t\t\t\t_id: {COMPANYID: "$COMPANYID"},\n\t\t\t\t\tuniqueIds: {$addToSet: "$_id"},\n\t\t\t\t\tcount: {$sum: 1}\n\t\t\t\t\t}\n\t\t\t\t},\n\n\t\t\t\t// Stage 2\n\t\t\t\t{\n\t\t\t\t\t$match: { \n\t\t\t\t\tcount: {"$gt": 1}\n\t\t\t\t\t}\n\t\t\t\t}, \n\t\t\t],\n\n\t\t\t// Options\n\t\t\t{\n\t\t\t\tallowDiskUse: true\n\t\t\t}\n\n\t\t\t// Created with Studio 3T, the IDE for MongoDB - https://studio3t.com/\nIf there are duplicates run run the Scritp on Studio3t or K8s mongoScript - 
http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/docker/mongo_utils/scripts/COMPANYglobalcustomerids_fix_duplicates_script.jsRun on K8s:log in to the correct cluster on the backend namespace copy script - kubectl cp  ./reload_entities_fix_COMPANY_id_DEV.js mongo-0:/tmp/reload_entities_fix_COMPANY_id_DEV.jsrun - nohup mongo --host mongo/localhost:27017 -u admin -p <pass> --authenticationDatabase admin reload_entities_fix_COMPANY_id_DEV.js > out/reload_DEV.out 2>&1 &download result - kubectl cp mongo-0:/tmp/out/reload_DEV.out ./reload_DEV.outUsing the output, find all "TODO" lines and regenerate the correct eventsReload events    Events RUNYou can use the following 2 scripts:\n#!/bin/bash\n\nfile=$1\nevent_type=$2\n\ndos2unix $file\n\njq -R -s -c 'split("\\n")' < "${file}" | jq --arg eventTimeArg `date +%s%3N` --arg eventType ${event_type} -r '.[] | . +"|{\\"eventType\\": \\"\\($eventType)\\", \\"eventTime\\": \\"\\($eventTimeArg)\\", \\"entityModificationTime\\": \\"\\($eventTimeArg)\\", \\"entitiesURIs\\": [\\"" + (.|tostring) + "\\"], \\"mdmSource\\": \\"RELTIO\\", \\"viewName\\": \\"default\\"}"'\n\n\nThis script's input is a file with entity IDs separated by new lines.Example:entities/xVIK0nhentities/uP4eLwsentities/iiKryQOentities/ZYjRCFNentities/13n4v93AExample execution:./script.sh dev_reload_empty_ids.csv HCP_CHANGED >> EMEA_DEV_events.txtOR\n#!/bin/bash\n\nfile=$1\n\ndos2unix $file\n\njq -R -s -c 'split("\\n")' < "${file}" | jq --arg eventTimeArg `date +%s%3N` -r '.[] | (. | tostring | split(",") | .[0] | tostring ) +"|{\\"eventType\\": \\""+ ( . | tostring | split(",") | if .[1] == "LOST_MERGE" then "HCP_LOST_MERGE" else "HCP_CHANGED" end ) + "\\", \\"eventTime\\": \\"\\($eventTimeArg)\\", \\"entityModificationTime\\": \\"\\($eventTimeArg)\\", \\"entitiesURIs\\": [\\"" + (. 
| tostring | split(",") | .[0] | tostring ) + "\\"], \\"mdmSource\\": \\"RELTIO\\", \\"viewName\\": \\"default\\"}"'\n\n\nThis script's input is a file with entityId,status pairs separated by new lines.Example:entities/10BBdiHR,LOST_MERGEentities/10BBdv4D,LOST_MERGEentities/10BBe7qz,LOST_MERGEentities/10BBgKFF,INACTIVEentities/10BBgOVV,ACTIVEExample execution:./script_2_columns.sh dev_reload_lost_merges.csv >> EMEA_DEV_events.txtPush the generated file to the Kafka topic using the Kafka producer:./start_sasl_producer.sh prod-internal-reltio-events < EMEA_PROD_events.txtSnowflake Check\n-- COMPANY COMPANY_GLOBAL_CUSTOMER_ID checks - null/empty\nSELECT count(*) FROM ENTITIES WHERE COMPANY_GLOBAL_CUSTOMER_ID IS NULL OR COMPANY_GLOBAL_CUSTOMER_ID = '' \nSELECT * FROM ENTITIES WHERE COMPANY_GLOBAL_CUSTOMER_ID IS NULL OR COMPANY_GLOBAL_CUSTOMER_ID = '' \n\n-- duplicates (note: AND, not OR - we want non-null, non-empty values only)\nSELECT COMPANY_GLOBAL_CUSTOMER_ID \nFROM ENTITIES \nWHERE COMPANY_GLOBAL_CUSTOMER_ID IS NOT NULL AND COMPANY_GLOBAL_CUSTOMER_ID != '' \nGROUP BY COMPANY_GLOBAL_CUSTOMER_ID HAVING COUNT(*) >1\n\n\n"
},
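The jq one-liners in the SOP above can be awkward to adapt for a quick one-off fix. As a minimal sketch (the function name is ours; the event fields mirror the format shown in the SOP), the same `<entityURI>|<event JSON>` line can be built for a single entity:

```shell
# Minimal sketch: build one "<entityURI>|<event JSON>" line in the same
# format the jq scripts above emit. Function name is ours; the event layout
# and the epoch-millisecond timestamp (`date +%s%3N`) come from the SOP.
make_event_line() {
  uri="$1"; type="$2"
  ts=$(date +%s%3N)   # epoch milliseconds, as in the SOP scripts
  printf '%s|{"eventType": "%s", "eventTime": "%s", "entityModificationTime": "%s", "entitiesURIs": ["%s"], "mdmSource": "RELTIO", "viewName": "default"}\n' \
    "$uri" "$type" "$ts" "$ts" "$uri"
}

make_event_line "entities/xVIK0nh" "HCP_CHANGED"
```

The output line can then be appended to the events file that is later pushed with start_sasl_producer.sh.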
{
"title": "Initialization Process",
"pageID": "218694652",
"pageLink": "/display/GMDM/Initialization+Process",
"content": "The process will sync COMPANYGlobalCustomerID attributes to the MongoDB (EntityHistory and COMPANYIDRegistry) and then refresh the snowflake with this data.The process is divided into the following steps:Create an index in Mongodb.entityHistory.createIndex({COMPANYGlobalCustomerID: -1},  {background: true, name:  "idx_COMPANYGlobalCustomerID"});Configure entity-enricher so it has the ov:false option for COMPANYGlobalCustomerIDbundle.nonOvAttributesToInclude:- COMPANYCustID- COMPANYGlobalCustomerIDDeploy the hub components with callback enabled -COMPANYGlobalCustomerIDCallback (3.9.1 version)RUN hub_reconciliation_v2 - first run the HUB Reconciliation -> this will enrich all Mongo data with COMPANYGlobaCustomerID with ov:true and ov:false valuesbased on EMEA this is here - http://euw1z1dl039.COMPANY.com:8080/airflow/tree?dag_id=hub_reconciliation_v2_emea_dev&root=doc - HUB Reconciliation Process V2check if the configuration contains the following - nonOvAttrToInclude: "COMPANYCustID,COMPANYGlobalCustomerID"check S3 directory structure and reconciliation.properties file in emea/<env>/inbound/hub/hub_reconciliation/ http://euw1z1dl039.COMPANY.com:8080/airflow/tree?dag_id=hub_reconciliation_v2_emea_devhttp://euw1z1dl039.COMPANY.com:8080/airflow/tree?dag_id=hub_reconciliation_v2_emea_qahttp://euw1z1dl039.COMPANY.com:8080/airflow/tree?dag_id=hub_reconciliation_v2_emea_stageRUN hub_COMPANYglobacustomerid_initial_sync_<ENV> DAGIt contains 2 steps:COMPANYglobacustomerid_active_inactive_reconciliation the groovy script that - check the HUB entityHistory ACTIVE/INACTIVE/DELETED entities - for all these entities get ov:true COMPANYGlobalCustomerId and enrich Mongo and CacheCOMPANYglobacustomerid_lost_merge_reconciliation  the groovy script that - this step checks LOST_MERGE entities. Do the merge_tree full export from Reltio. Based on merge_tree adds the RUN snowflake_reconciliation - full snowflake reconciliation by generating the full file with empty checksums"
},
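The reconciliation precondition described above (the configuration must list both non-OV attributes) can be verified with a small check before triggering the DAG. This is an illustrative sketch; the helper name is ours, and the expected property value is quoted verbatim from the page:

```shell
# Sketch: verify a reconciliation config file contains the required non-OV
# attribute list from the SOP before running hub_reconciliation_v2.
# Helper name is ours; the property line is the one quoted on this page.
check_non_ov_attrs() {
  config_file="$1"
  if grep -q 'nonOvAttrToInclude: "COMPANYCustID,COMPANYGlobalCustomerID"' "$config_file"; then
    echo "OK"
  else
    echo "MISSING"
  fi
}
```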
{
"title": "Remove Duplicates and Regenerate Events",
"pageID": "272368703",
"pageLink": "/display/GMDM/Remove+Duplicates+and+Regenerate+Events",
"content": "This SOP describes the workaround to fix the COMPANYGlobalCustomerID duplicated values.Case:There are 2 entities with the same COMPANYGlobalCustomerID.Example:    1Qbu0jBQ - Jun 14, 2022 @ 18:10:44.963    ID-mdmhub-reltio-subscriber-dynamic-866b588c7-w9crm-1655205289718-0-157609    ENTITY_CREATED    entities/1Qbu0jBQ    RELTIO    success    entities/1Qbu0jBQ        3Ot2Cfw  - Aug 11, 2022 @ 18:53:31.433    ID-mdmhub-reltio-subscriber-dynamic-79cd788b59-gtzm6-1659525443436-0-1693016    ENTITY_CREATED    entities/3Ot2Cfw    RELTIO    success    entities/3Ot2Cfw3Ot2Cfw  is a WINNER1Qbu0jBQ  is a LOSER. Rule: if there are duplicates, always pick the LOST_MERGED entity and update the looser only with the different value. Do not change an active entity:Steps:GO to Reltio to the winner and check the other (OV:FALSE) COMPANYGlobalCustomerIDsPick the new value from the list:Check if there are no duplicates in Mongo, and search for a new value by the COMPANY in the cache. If exists pick different.Update Mongo Cache:Regenerate event:if the loser entity is now active in Reltio but not active in Mongo regenerate CREATED event:entities/1Qbu0jBQ|{  "eventType" : "HCP_CREATED",  "eventTime" : "1666090581000",  "entityModificationTime" : "1666090581000",  "entitiesURIs" : [ "entities/1Qbu0jBQ" ],  "mdmSource" : "RELTIO",  "viewName" : "default" }if the loser entity is not present in Reltio because is a looser regenerate LOST_MERGE event:entities/1Q7XLreu|{"eventType":"HCO_LOST_MERGE","eventTime":1666018656000,"entityModificationTime":1666018656000,"entitiesURIs":["entities/1Q7XLreu"],"mdmSource":"RELTIO","viewName":"default"}Example PUSH to PROD:Check Mongo, an updated entity should change COMPANYGlobalCustomerIDCheck ReltioCheck Snowflake"
},
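The choice between the two regenerated event types described above can be sketched as a tiny helper. The names and flags are ours; the rule itself is the one stated in the SOP:

```shell
# Sketch of the SOP's rule: regenerate a CREATED event when the loser is
# active in Reltio but not in Mongo, and a LOST_MERGE event when the loser
# is no longer present in Reltio. Helper name and arguments are ours.
event_type_for() {
  kind="$1"        # HCP or HCO
  in_reltio="$2"   # yes|no: is the loser still active in Reltio?
  if [ "$in_reltio" = "yes" ]; then
    echo "${kind}_CREATED"
  else
    echo "${kind}_LOST_MERGE"
  fi
}
```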
{
"title": "Project FLEX (US):",
"pageID": "302705645",
"pageLink": "/pages/viewpage.action?pageId=302705645",
"content": ""
},
{
"title": "Batch Loads - Client-Sourced",
"pageID": "164470098",
"pageLink": "/display/GMDM/Batch+Loads+-+Client-Sourced",
"content": "Log in to US PROD Kibana: https://amraelp00006209.COMPANY.com:5601/app/kibanause the dedicated "kibana_gbiccs_user" Go to the Dashboards Tab - "PROD Batch loads"Change the Time rage Choose 24 hours to check if the new file was loaded for the last 24 hours.The Dashboard is divided into the following sections:File by type - this visualization presents how many file of the specific type were loaded during a specific time rangeFile load count - this visualization presents when the specific file was loadedFile load summary - on this table you can verify the detailed information about file loadCheck if files are loaded with the following agenda:SAP - incremental loads - max 4 files per day, min 2 files per day Agenda: whenhoursMonday-Friday 1. 01:20 CET time 2. 13:20 CET time 3. 17:20 CET time 4. 21:20 CET timeSaturday1. 01:20 CET timeSundaynoneHIN - incremental loads - 2 file per day. WKCE.*.txt and WKHH.*.txtAgenda:whenhoursTuesday-Saturday1. estimates: 12PM - 1PM CET timeDEA - full load -  1 file per week FF_DEA_IN_.*.txtAgenda:whenhoursTuesday1. estimates: 10AM - 12PM CET time340B - incremental load - 4 files per month. 340B_FLEX_TO_RELTIO_*.txtAgenda:Files uploaded on 3rd, 10th, 24th and the last day of the month at ~12:30 PM CET time. If the upload day is on the weekend, the file will be loaded on the next workday.Check if DEA file limit was not exceeded. Check "Suspended Entities" attribute. If this parameter is grater than 0, it means that DEA post processing was not invoked. Current DEA post processing limit is 22 000. 
To increase the limit - send the notification (7.d); after agreement do (8.)Take action if the input files are not delivered on schedule:SAP To:  santosh.dube@COMPANY.com;Venkata.Mandala@COMPANY.com;Jayant.Srivastava@COMPANY.com;DL-GMFT-EDI-PRD-SUPPORT@COMPANY.comCC: tj.struckus@COMPANY.com;Patrick.Neuman@COMPANY.com;przemyslaw.warecki@COMPANY.com;mikolaj.morawski@COMPANY.com;Melissa.Manseau@COMPANY.com;Deanna.Max@COMPANY.com;Laura.Faddah@COMPANY.com;DL-CBK-MAST@COMPANY.com;BalaSubramanyam.Thirumurthy@COMPANY.comHINTo: santosh.dube@COMPANY.com;Venkata.Mandala@COMPANY.com;Jayant.Srivastava@COMPANY.com;DL-GMFT-EDI-PRD-SUPPORT@COMPANY.comCC: tj.struckus@COMPANY.com;Patrick.Neuman@COMPANY.com;przemyslaw.warecki@COMPANY.com;mikolaj.morawski@COMPANY.com;Melissa.Manseau@COMPANY.com;Deanna.Max@COMPANY.com;Laura.Faddah@COMPANY.com;DL-CBK-MAST@COMPANY.com; BalaSubramanyam.Thirumurthy@COMPANY.comDEATo: santosh.dube@COMPANY.com;Venkata.Mandala@COMPANY.com;Jayant.Srivastava@COMPANY.com;DL-GMFT-EDI-PRD-SUPPORT@COMPANY.comCC: tj.struckus@COMPANY.com;Patrick.Neuman@COMPANY.com;przemyslaw.warecki@COMPANY.com;mikolaj.morawski@COMPANY.com;Melissa.Manseau@COMPANY.com;Deanna.Max@COMPANY.com;Laura.Faddah@COMPANY.com;DL-CBK-MAST@COMPANY.com; BalaSubramanyam.Thirumurthy@COMPANY.comDEA - limit notificationTo: santosh.dube@COMPANY.com;tj.struckus@COMPANY.com;Melissa.Manseau@COMPANY.com;BalaSubramanyam.Thirumurthy@COMPANY.comCC: przemyslaw.warecki@COMPANY.com;mikolaj.morawski@COMPANY.comTake action if the DEA limit was exceeded. 
Log in to each PROD hostGo to "cd /app/mdmgw/batch_channel/config/"Edit "application.yml" on each host:Change poller.inputFormats.DEA.deleteDateLimit: 22 000 to the new value.Restart Components: Execute https://jenkins-gbicomcloud.COMPANY.com:8443/job/mdm_manage_playbooks/job/Microservices/job/manage_microservices__prod_us/component: mdmgw_batch-channel_1node: all_nodescommand: restartLoad the latest DEA file (MD5 checksum skips all entities, so only the post-processing step will be executed) Change and commit the new limit to GIT: https://github.com/COMPANY/mdm-reltio-handler-env/blob/master/inventory/prod_us/group_vars/gw-services/batch_channel.yml Example Emails:DEA limit exceeded: DEA load checkHi Team,We just received the DEA file; the current DEA post-processing limit is set to 22 000. The DEA load resulted in xxxx profiles to be updated in post-processing. Should I change the limit and re-process the profiles?Regards,HIN File missingHIN PROD file missingHi, Today we expected to receive new HIN files. I checked and the HIN files are missing from the S3 bucket. Last week we received files at <time> CET time.Here is the screenshot that presents files that we received last week:<screen from S3 bucket>Could you please verify this?Regards,"
},
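The DEA check above (a "Suspended Entities" value greater than 0 means post-processing was not invoked because the limit was hit) can be expressed as a small sketch; the helper name is ours:

```shell
# Sketch: interpret the "Suspended Entities" attribute from a DEA load.
# Per the SOP, any value greater than 0 means DEA post-processing was not
# invoked (the current limit is 22 000). Helper name is ours.
dea_post_processing_status() {
  suspended="$1"
  if [ "$suspended" -gt 0 ]; then
    echo "POST_PROCESSING_SKIPPED"
  else
    echo "OK"
  fi
}
```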
{
"title": "Batch Loads - Update Addresses",
"pageID": "164469820",
"pageLink": "/display/GMDM/Batch+Loads+-+Update+Addresses",
"content": "Log in to US PROD Kibana: https://amraelp00006209.COMPANY.com:5601/app/kibanause the dedicated "kibana_gbiccs_user" Go to the Dashboards Tab - "PROD Batch loads"Change the Time rage Choose 24 hours to check if the new file was loaded for the last 24 hours.The Dashboard is divided into the following sections:File by type - this visualization presents how many file of the specific type were loaded during a specific time rangeFile load count - this visualization presents when the specific file was loadedFile load summary - on this table you can verify the detailed information about file loadFile load status count - the user name ("integration_batch_user") that executes the API and "status" - the number of requests ended with the status. To get more details o to PROD Api CallsResponse status load summary - the number of requests ended with the specific status. To get more details o to PROD Api CallsThe result report name or the details saved in Kibana contains correlation ID. example Report name: DEV_update_profiles_integration_testing_ID-5e1b4bdf7525-1574860947734-0-819_REPORT.csv example correlation ID: ID-5e1b4bdf7525-1574860947734-0-819To get more details o to PROD Api CallsSearch by the correlation ID related to the latest Addresses update file load. The following screenshot presents how many operations were invoked during the Addresses update.In this example, the input file contains 3 Customers.During the process, 3 Search API calls and 3 Attribute Updates API calls were invoked with success. DOCPlease read the following Technical Design document related to the Addresses updating process. 
This document contains a detailed description of the process and all inbound and outbound interface types.S3 report and distributionThe report is uploaded to the S3 location: PROD location: mdmprodamrasp42095/PROD/archive/ADDRESSES/The report is published in the AWS S3 bucket.The file name format is the following: “<name>_<correlation_id>.csv”Where <name> is the input file name.Where <correlation_id> is the number of the batch related to the whole addresses update process. Using the correlation number the operator can find all updates sent to Reltio and easily verify the status of the batch.Download the file and publish it to the SharePoint location. Send the notification to the designated mailing group. SharePoint upload location:\\\\smbgbl.drmvfs101.COMPANY.com\\gfs_cbk\\Contracts-Chargeback\\Chargebacks_Reporting\\Reltio\\Addresses Update ReportMailing group:    To: Melissa.Manseau@COMPANY.com,santosh.dube@COMPANY.com,Deanna.Max@COMPANY.com,Laura.Faddah@COMPANY.com,Xin.Sun@COMPANY.com,crystal.sawyer@COMPANY.com     CC:przemyslaw.warecki@COMPANY.com,mikolaj.morawski@COMPANY.comEmail template:FLEX Addresses updating process - Report - <generation_date>Hi,  Please be informed that the Addresses updating process report is available for verification.Report: → <SharePoint URL>Regards,Mikolaj "
},
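Since the report name embeds the correlation ID (`<name>_<correlation_id>.csv`), it can be pulled back out for the Kibana search. A sketch using the example name from the page; the grep pattern is our assumption about the `ID-...` shape shown above:

```shell
# Sketch: extract the correlation ID from a report file name of the form
# "<name>_<correlation_id>.csv" so it can be searched in PROD Api Calls.
# The pattern is our assumption based on the example ID on this page.
report="DEV_update_profiles_integration_testing_ID-5e1b4bdf7525-1574860947734-0-819_REPORT.csv"
corr_id=$(printf '%s' "$report" | grep -o 'ID-[A-Za-z0-9]*-[0-9]*-[0-9]*-[0-9]*')
echo "$corr_id"
```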
{
"title": "Batch Loads - Update Identifiers",
"pageID": "164470070",
"pageLink": "/display/GMDM/Batch+Loads+-+Update+Identifiers",
"content": "Log in to US PROD Kibana: https://amraelp00006209.COMPANY.com:5601/app/kibanause the dedicated "kibana_gbiccs_user" Go to the Dashboards Tab - "PROD Batch loads"Change the Time rage Choose 24 hours to check if the new file was loaded for the last 24 hours.The Dashboard is divided into the following sections:File by type - this visualization presents how many file of the specific type were loaded during a specific time rangeFile load count - this visualization presents when the specific file was loadedFile load summary - on this table you can verify the detailed information about file loadFile load status count - the user name ("identifiers_batch_user") that executes the API and "status" - the number of requests ended with the status. To get more details o to PROD Api CallsResponse status load summary - the number of requests ended with the specific status. To get more details o to PROD Api CallsThe result report name or the details saved in Kibana contains correlation ID. example Report name: DEV_update_profiles_integration_testing_ID-5e1b4bdf7525-1574860947734-0-819_REPORT.csv example correlation ID: ID-5e1b4bdf7525-1574860947734-0-819To get more details o to PROD Api CallsSearch by the correlation ID related to the latest Identifiers file load. The following screenshot presents how many operations were invoked during the Identifiers update.In this example, the input file contains 3 Customers.During the process, 3 Search API calls and 3 Attribute Updates API calls were invoked with success. DOCPlease read the following Technical Design document related to the Identifiers updating process. 
This document contains a detailed description of the process and all inbound and outbound interface types.S3 report and distributionThe report is uploaded to the S3 location: PROD location: mdmprodamrasp42095/PROD/archive/IDENTIFIERS/The report is published in the AWS S3 bucket.The file name format is the following: “<name>_<correlation_id>.csv”Where <name> is the input file name.Where <correlation_id> is the number of the batch related to the whole identifiers update process. Using the correlation number the operator can find all updates sent to Reltio and easily verify the status of the batch.Download the file and publish it to the SharePoint location. Send the notification to the designated mailing group. SharePoint upload location:\\\\smbgbl.drmvfs101.COMPANY.com\\gfs_cbk\\Contracts-Chargeback\\Chargebacks_Reporting\\Reltio\\Identifier Update ReportMailing group:    To: Melissa.Manseau@COMPANY.com,santosh.dube@COMPANY.com,Deanna.Max@COMPANY.com,Laura.Faddah@COMPANY.com,Xin.Sun@COMPANY.com,crystal.sawyer@COMPANY.com     CC:przemyslaw.warecki@COMPANY.com,mikolaj.morawski@COMPANY.comEmail template:FLEX Identifiers updating process - Report - <generation_date>Hi,  Please be informed that the Identifiers updating process report is available for verification.Report: → <SharePoint URL>Regards,Mikolaj "
},
{
"title": "FLEX QC",
"pageID": "164470057",
"pageLink": "/display/GMDM/FLEX+QC",
"content": "AgendaThe following table presents the scheduled agenda of the process:whenhoursEach Saturday 13:00 (UTC time)The process has to be verified on Monday morning CET time. After successful verification the report has to be sent to the designated mailing group.Prometheus DashboardThere is a requirement to monitor the process after each run and send the generated comparison report. The overview Monitoring Prometheus dashboard is available here:https://mdm-monitoring.COMPANY.com/grafana/d/COVgYieiz/alerts-monitoring?orgId=1&refresh=10s&var-region=usWhen the dashboard contains GREEN color on "US PROD Airflow DAG's Status" panel -  The process ended with success.When the dashboard contains RED color on "US PROD Airflow DAG's Status" panel -  The process ended with failure. The details are available in Airflow.AirflowLog in to Airflow platform: https://cicd-gbl-mdm-hub.COMPANY.com/airflow/tree?dag_id=flex_validate_us_prod you can use admin userLogin pageGo to the "flex_validate_us_prod" JobTo check details of the specific Task, click on the Task and then in pop up window click "View Logs" *_validation_tasks - these tasks are "Sub DAG's". To verify the internal tasks click on the SUB DAG, then in pop up window click "Zoom into SUB DAG". After LOGs verification there is a possibility to re-run the process from the last failure point, To do this process the following steps:Click on the Task. In the pop-up window choose "Clear" Clearing deletes the previous state of the task instance, allowing it to get re-triggered by the scheduler or a backfill command. It means that all future tasks are cleaned and started one more time.DOCPlease read the following Technical Design document related to the FLEX Quality check process. 
This document contains a detailed description of the Airflow process and all inbound and outbound interface types.S3 report and distributionThe comparison report is uploaded to the S3 location: PROD location: mdmprodamrasp42095/verify/PROD/report/The file name format is the following: “comparison_report_full_<date>.csv”Where <date> is YYYYMMDDTHHMMSS (20191001T072509)Download the file and publish it to the SharePoint location. Send the notification to the designated mailing group. Report preprocessing and XLSX creation:Open comparison_report_full_<date>.csv with Notepad++Because Excel removes leading zero characters, the replacement needs to be done using Search mode: Regular expression. Replace all\n;"0(.*?)";\nto \n;="0\\1";\nCheck the CSV for multi-line comments (NotesText attribute). They might disturb the CSV format. Replace all\n([^"])\\n\nto \n"\\1"\n(remove the quote marks - cannot escape backslash in Confluence)Fix the header row (add the removed \\n)Save the fileOpen the CSV file by double-clicking it to open it in Excel.Click on the left top corner to mark all columns and rowsdouble click on the line between column "A" and "B" to adjust the column width.Apply the "Filter" option on the Header.Verify the result. Each row needs to start with a source name. Check the source column. Check that the NotesText attribute is in one row and the format is correct.When the format is correct the source column should contain only the following values:Save the file in XLSX formatClick "File" → Save as. Choose "Save as type" = "Excel Workbook (*.xlsx)Send both CSV and XLSX formats to the SharePoint location:8. As recently requested, I have deleted rows with “attributes.Name.value” error and with CXkfvVy entity. 
SharePoint upload location:\\\\smbgbl.drmvfs101.COMPANY.com\\gfs_cbk\\Contracts-Chargeback\\Chargebacks_Reporting\\Reltio\\Reltio QC ReportWhen uploading new files, move the files from the previous week to the 'archive' subfolder and upload the latest files to the main folder 'Reltio QC Report'.Mailing group:    To: Manseau, Melissa <Melissa.Manseau@COMPANY.com>; Dube, Santosh R <santosh.dube@COMPANY.com>;  Faddah, Laura Jordan <Laura.Faddah@COMPANY.com>; Sun, Ivy <Xin.Sun@COMPANY.com>; Antoine, Melissa <melissa.antoine@COMPANY.com>; DL-CBK-MAST <DL-CBK-MAST@COMPANY.com>    CC: Warecki, Przemyslaw <Przemyslaw.Warecki@COMPANY.com>; Morawski, Mikolaj <Mikolaj.Morawski@COMPANY.com>; Anuskiewicz, Piotr <Piotr.Anuskiewicz@COMPANY.com>Email template:<generation_date> - each report is generated during the weekend. So for example when the report generation was executed between 01/04/2020-01/05/2020 (weekend), then the generation_date should be the same. The date format should be consistent with US notation. (MM/dd/yyyy)  e.g. 01/04/2020-01/05/2020<SharePoint URL> - the URL in the email needs to be formatted because of the spaces in the path. FLEX QC result - Report - <generation_date>Hi,Please be informed that the new QC report is available for verification.Report: → <SharePoint URL>Best Regards,KarolContact: BalaSubramanyam.Thirumurthy@COMPANY.com,santosh.dube@COMPANY.com when a FLEX/HIN/DEA file is missing.Contact: Venkata.Mandala@COMPANY.com Chakrapani.Kruthiventi@COMPANY.com,santosh.dube@COMPANY.com when an SAP file is missing.Contact: santosh.dube@COMPANY.com,Venkata.Mandala@COMPANY.com,Jayant.Srivastava@COMPANY.com,DL-GMFT-EDI-PRD-SUPPORT@COMPANY.com - with a GIS file transfer problem (missing files)14/02/2023Hi Karol,You can remove me from this distribution going forward.Thanks,Deanna K. Max27/02/2023Hi Karol,Ive moved to a new role and no longer need to be apart of this distribution. Can you please remove me?Regards,Crystal Sawyer "
},
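The Notepad++ regular-expression step above can also be done from the command line. Here is a sketch with sed; the `[^"]*` class stands in for the non-greedy `(.*?)`, and the helper name is ours:

```shell
# Sketch: same replacement as the Notepad++ step, as a sed filter. Quoted
# fields starting with 0 become Excel formulas (;"0123"; -> ;="0123";) so
# Excel keeps the leading zeros. [^"]* approximates the non-greedy (.*?).
fix_leading_zeros() {
  sed 's/;"0\([^"]*\)";/;="0\1";/g'
}

echo 'SAP;"0123";note' | fix_leading_zeros
```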
{
"title": "Generate events to prod-out-full-gblus-flex-all*.json file",
"pageID": "333156205",
"pageLink": "/display/GMDM/Generate+events+to+prod-out-full-gblus-flex-all*.json+file",
"content": "Go to gblmdmhubprodamrasp101478/us/prod/inbound/oneview-cov/prod-out-full-gblus-flex-all (concat_s3_files_gblus_prod input directory)Copy files for desired period of time to your local workspaceDownload attached script and modify events variableExecute attached script in the directory below downloaded files. It will find the latest event for every element in events list and store them in agregated_events.jsonArrange with the person requesting event generation that they stop the process for 24h. When they stop the process, you can add the found events to a file in gblmdmhubprodamrasp101478/us/prod/inbound/oneview-cov/inbound s3 directoryAfter file is modified thay can start ingestion process and verify if events were properly generatedfindEvents.sh"
},
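What the page describes findEvents.sh doing ("find the latest event for every element in the events list") can be sketched roughly as follows. The helper name is ours, and we assume the downloaded files are passed oldest-first so the last match is the latest event:

```shell
# Rough sketch of the behavior this page attributes to findEvents.sh: for a
# given entity URI, keep only the last matching line across the downloaded
# files. Assumes files are passed in chronological order (oldest first).
find_latest_event() {
  uri="$1"; shift
  grep -h -F "$uri" "$@" | tail -n 1
}
```

Looping this over each URI in the events list and appending the results would approximate the aggregated output file.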
{
"title": "Re-Loading SAP/HIN/DEA Files After Batch Channel Stopped",
"pageID": "164470077",
"pageLink": "/pages/viewpage.action?pageId=164470077",
"content": "These are the steps to be taken to correctly process SAP/HIN/DEA files after mdmgw_batch_channel docker container is stopped on PROD and has to be restarted:Create an emergency RFC for this actionChange configuration of the batch_channel component on PROD1 (amraelp00006207) under /app/mdmgw/batch_channel/config/application.yml:change relativePathPattern: DEA/.* to relativePathPattern: DEA_LOAD/.*change relativePathPattern: HIN/.* to relativePathPattern: HIN_LOAD/.*change relativePathPattern: SAP/.* to relativePathPattern: SAP_LOAD/.*This is required because GIS publishes files to */DEA/HIN/SAP automatically and we don't want to consume them during the fix.     3. Empty all /inbound/* directories by moving all files from:/inbound/SAP to /archive/SAP_tmp/inbound/DEA to /archive/DEA_tmp/inbound/HIN to /archive/HIN_tmp4. After inbound directories are empty start batch_channel component on PROD1 (amraelp00006207). Process files in FIFO order by moving them in order from:/archive/SAP_tmp to /inbound/SAP_LOAD/archive/DEA_tmp to /inbound/DEA_LOAD/archive/HIN_tmp to /inbound/HIN_LOAD5. After these files are processes stop batch_channel on PROD1 (amraelp00006207).6. Restore configuration on PROD1 under /app/mdmgw/batch_channel/config/application.yml:relativePathPattern: DEA_LOAD/.* to relativePathPattern: DEA/.* relativePathPattern: HIN_LOAD/.* to relativePathPattern: HIN/.* relativePathPattern: SAP_LOAD/.* to relativePathPattern: SAP/.* 7. Start batch_channel on PROD1, PROD2 and PROD3 waiting 1 minute before start on each subsequent node.8. Check if nodes started and clustered correctly:"java.lang.IllegalStateException: Zookeeper based route policy prohibits processing exchanges, stopping route and failing the exchange" should be seen in /app/mdmgw/batch_channel/log/application.log on 2 nodes (usually node 2 and node 3)"Candidatenode '/batchChannel/batch-channel-prod/' has been created" message appears in the log on one node (usually node 1).9. 
Move previously processed files from /archive/*_load to /archive/*"
},
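Step 4's "process files in FIFO order" can be sketched as a small move loop. The directory names come from the SOP; the helper is ours, and `ls -1tr` sorts oldest-first:

```shell
# Sketch of step 4: move archived files back into the *_LOAD inbound
# directory oldest-first (FIFO), so the batch channel picks them up in
# arrival order. Helper name is ours; run it per source, e.g.
#   move_fifo /archive/SAP_tmp /inbound/SAP_LOAD
move_fifo() {
  src="$1"; dst="$2"
  ls -1tr "$src" | while read -r f; do
    mv "$src/$f" "$dst/$f"
  done
}
```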
{
"title": "S3 keys replacement",
"pageID": "379129646",
"pageLink": "/display/GMDM/S3+keys+replacement",
"content": "PROD ( amraelp00006207, amraelp00006208, amraelp00006209):Remember that the replacement has to be done on all three instances!Replace keys for batch channel and do recreate containers. /app/mdmgw/batch_channel/config/application.yml      2.  Replace keys for reltio subscriber and do recreate containers/app/mdmhub/reltio_subscriber/config/application.yml     3. Replace keys for archiver and do not recreate containers/app/archiver/config/archiver.env    4. Replace keys for airflow dags https://cicd-gbl-mdm-hub.COMPANY.com/airflow/homeNPROD (DEV / TEST - amraelp00005781): Replace keys for batch channel and recreate containers. /app/mdmgw/dev-mdm-srv/batch_channel/config/application.yml/app/mdmgw/test-mdm-srv/batch_channel/config/application.ymlAfter manual replacement in the components:Replace keys in the repository:Use replace_aws_keys.sh to find and replace keys in the repository. Deploy changes! MDM Hub Deploy Jobs and MDM Gateway Deploy Jobs"
},
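After the manual replacement and the repository update, a quick way to confirm nothing still references the old key is a recursive grep. A sketch; the function name and the key value passed in are placeholders:

```shell
# Sketch: after replacing AWS keys, scan a directory tree for any file that
# still contains the old access key id. Function name and the key value
# used in the test are placeholders.
check_stale_keys() {
  dir="$1"; old_key="$2"
  if grep -R -q -F "$old_key" "$dir" 2>/dev/null; then
    echo "STALE_KEY_FOUND"
  else
    echo "CLEAN"
  fi
}
```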
{
"title": "Project Highlander:",
"pageID": "302705635",
"pageLink": "/pages/viewpage.action?pageId=302705635",
"content": ""
},
{
"title": "Highlander IDL Quality Check",
"pageID": "164470068",
"pageLink": "/display/GMDM/Highlander+IDL+Quality+Check",
"content": "It is required to check HCO and HCP counts at selected checkpoins of C8 flow and document it.CheckpointsReltiocounts are fetched using Reltio API (see procedures)HUBafter HUB refresh is completed - all events processed and error queue is emptycounts per country are compared to ReltioNexusafter data are spooled from Mongo and  DM  is refreshed.data are compared to HUBHUB (C8 filters)only active profiles having at least on active crosswalk from MI or OK are includedcounts are retrieved from Mongo using a query with filters on C8 constrainstsNexus (C8)records exported to C8 files are counted and compared to HUBODSrecords published to CMD are counted  and compared to Nexus (C8)DocumentPlease create document using the template.ProceduresRetrieving counts from  ReltioCall following APITo get HCP counts\nGET https://{{url}}/reltio/api/{{tenantID}}/entities/_facets?facet=type,attributes.Country&options=searchByOv&max=2000&filter=equals(type,'HCP') and in(attributes.Country,"AI,AN,AG,AR,AW,BS,BB,BZ,BM,BO,BR,CL,CO,CR,CW,DO,EC,GT,GY,HN,JM,KY,LC,MX,NI,PA,PY,PE,PN,SV,SX,TT,UY,VG,VE")\nTo get HCO counts:\nGET https://{{url}}/reltio/api/{{tenantID}}/entities/_facets?facet=type,attributes.Country&options=searchByOv&max=2000&filter=equals(type,'HCO') and in(attributes.Country,"AI,AN,AG,AR,AW,BS,BB,BZ,BM,BO,BR,CL,CO,CR,CW,DO,EC,GT,GY,HN,JM,KY,LC,MX,NI,PA,PY,PE,PN,SV,SX,TT,UY,VG,VE")\nRetrieving counts from HUB (global)Query\ndb.getCollection("entityHistory").aggregate(\n\t// Pipeline\n\t[\n\t\t// Stage 1\n\t\t{\n\t\t\t$match: {\n\t\t\t "$and" : [\n\t\t\t {"status" : "ACTIVE"}, \n\t\t\t {"country" : {\n\t\t\t "$in" : [\n\t\t\t "ai", \n\t\t\t "an", \n\t\t\t "ag", \n\t\t\t "ar", \n\t\t\t "aw", \n\t\t\t "bs", \n\t\t\t "bb", \n\t\t\t "bz", \n\t\t\t "bm", \n\t\t\t "bo", \n\t\t\t "br", \n\t\t\t "cl", \n\t\t\t "co", \n\t\t\t "cr", \n\t\t\t "cw", \n\t\t\t "do", \n\t\t\t "ec", \n\t\t\t "gt", \n\t\t\t "gy", \n\t\t\t "hn", \n\t\t\t "jm", \n\t\t\t "ky", \n\t\t\t "lc", \n\t\t\t "mx", 
\n\t\t\t "ni", \n\t\t\t "pa", \n\t\t\t "py", \n\t\t\t "pe", \n\t\t\t "pn", \n\t\t\t "sv", \n\t\t\t "sx", \n\t\t\t "tt", \n\t\t\t "uy", \n\t\t\t "vg", \n\t\t\t "ve"\n\t\t\t ]\n\t\t\t }}\n\t\t\t ] \n\t\t\t}\n\t\t},\n\t\t// Stage 2\n\t\t{\n\t\t\t$group: {\n\t\t\t_id: {entityType: "$entityType", country: "$country" }, count: { $sum: 1 }\n\t\t\t}\n\t\t},\n\n\t]\n);\n\n\nRetrieving counts from HUB (C8 filters)Query\ndb.getCollection("entityHistory").aggregate(\n // Pipeline\n [\n // Stage 1\n {\n $match: {\n "$and" : [\n {"status" : "ACTIVE"},\n {"country" : {\n "$in" : [\n "ai", \n "an", \n "ag", \n "ar", \n "aw",\n "bs",\n "bb",\n "bz",\n "bm",\n "bo",\n "br",\n "cl",\n "co",\n "cr",\n "cw",\n "do",\n "ec",\n "gt",\n "gy",\n "hn",\n "jm",\n "ky",\n "lc",\n "mx",\n "ni",\n "pa",\n "py",\n "pe",\n "pn",\n "sv",\n "sx",\n "tt",\n "uy",\n "vg",\n "ve"\n ]\n }},\n {\n "entity.crosswalks" : {\n "$elemMatch" : {\n "type" : {\n "$in" : [\n "configuration/sources/OK",\n "configuration/sources/CRMMI",\n "configuration/sources/Reltio" \n ]\n },\n "deleteDate" : {\n "$exists" : false\n }\n }\n }\n }\n ] \n }\n },\n \n // Stage 2\n {\n $addFields: {\n "market": \n {"$switch": {\n branches: [\n { case: {"$in" : [ "$country", ["ag","ai","aw","bb","bs","cr","do","gt","hn","jm","lc","ni","pa","sv","tt","vg","cw","sx" ]]}, then: "ac" },\n { case: {"$in" : [ "$country", ["uy" ]]}, then: "ar" }\n ],\n default: "$country"\n } \n }\n }\n },\n \n // Stage 3\n {\n $group: {\n _id: {entityType: "$entityType", market: "$market" }, count: { $sum: 1 }\n }\n },\n \n ]\n);\n\n\n\n"
},
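The checkpoint procedure above repeatedly compares per-country counts between two systems (Reltio vs HUB, HUB vs Nexus, Nexus (C8) vs ODS). A minimal sketch of that comparison step is below; the function name and the dict-based input shape are illustrative assumptions, not part of the documented tooling (which uses the Reltio facet API and Mongo aggregations shown above):

```python
def compare_counts(reltio_counts, hub_counts):
    """Compare per-country counts taken at two checkpoints.

    Both arguments map an ISO country code (any case) to a count;
    codes are lower-cased before comparing, since Reltio facets use
    upper-case codes while the HUB Mongo query uses lower-case ones.
    Returns {country: (left_count, right_count)} for mismatches only.
    """
    norm = lambda d: {k.lower(): v for k, v in d.items()}
    a, b = norm(reltio_counts), norm(hub_counts)
    mismatches = {}
    for country in sorted(set(a) | set(b)):
        left, right = a.get(country, 0), b.get(country, 0)
        if left != right:
            mismatches[country] = (left, right)
    return mismatches
```

An empty result means the checkpoint matches; any non-empty result should be investigated and documented per the template.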
{
"title": "RawData:",
"pageID": "347666020",
"pageLink": "/pages/viewpage.action?pageId=347666020",
"content": ""
},
{
"title": "Restore raw entity data",
"pageID": "347666025",
"pageLink": "/display/GMDM/Restore+raw+entity+data",
"content": "The following SOP describes how to restore raw entity data.Steps:Login to UIGo to HUB Admin →  Restore Raw Data → Restore entitiesFill in the filters    a) Source environment - restore data from other environment (restore QA on DEV), default value will restore data from currently logged environment    b) Entity type - restore data only for selected entity types - requires at least one selected    c) Countries - restore data only for selected countries    d) Sources - restore data only for selected sources    e) Restore entities created after - only entities created after this date will be restored  Click the execute buttonValidate the results in Kibana API Calls Kibana"
},
{
"title": "Restore raw relation data",
"pageID": "347666056",
"pageLink": "/display/GMDM/Restore+raw+relation+data",
"content": "Steps:Login to UIGo to HUB Admin →  Restore Raw Data → Restore relationsFill in the filters    a) Source environment - restore data from other environment (restore QA on DEV), default value will restore data from currently logged environment    b) Countries - restore data only for selected countries    c) Sources - restore data only for selected sources    d) Relation types - restore data only for selected relation type    e) Restore relations created after - only relations created after this date will be restored  Click the execute buttonValidate the results in Kibana API Calls Kibana"
},
{
"title": "Reconciliation:",
"pageID": "164470071",
"pageLink": "/pages/viewpage.action?pageId=164470071",
"content": ""
},
{
"title": "How to Start the Reconciliation Process",
"pageID": "164470058",
"pageLink": "/display/GMDM/How+to+Start+the+Reconciliation+Process",
"content": "This procedure describes the reconciliation process between Reltio and Mongo. The result of this process is the Entities and Relations events generated for the HUB internal Kafka topics.       0. Check if the entityHistory and entityRelations contains the following indexes:entityHistory db.entityHistory.createIndex({country: -1}, {background: true, name: "idx_country"}); db.entityHistory.createIndex({sources: -1}, {background: true, name: "idx_sources"}); db.entityHistory.createIndex({entityType: -1}, {background: true, name: "idx_entityType"}); db.entityHistory.createIndex({status: -1}, {background: true, name: "idx_status"}); db.entityHistory.createIndex({creationDate: -1}, {background: true, name: "idx_creationDate"}); db.entityHistory.createIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"}); db.entityHistory.createIndex({"entity.crosswalks.value": 1}, {background: true, name: "idx_crosswalks_v_asc"}); db.entityHistory.createIndex({"entity.crosswalks.type": 1}, {background: true, name: "idx_crosswalks_t_asc"});entityRelations db.entityRelations.createIndex({country: -1}, {background: true, name: "idx_country"}); db.entityRelations.createIndex({sources: -1}, {background: true, name: "idx_sources"}); db.entityRelations.createIndex({entityType: -1}, {background: true, name: "idx_relationType"}); db.entityRelations.createIndex({status: -1}, {background: true, name: "idx_status"}); db.entityRelations.createIndex({creationDate: -1}, {background: true, name: "idx_creationDate"}); db.entityRelations.createIndex({lastModificationDate: -1}, {background: true, name: "idx_lastModificationDate"}); db.entityRelations.createIndex({startObjectId: -1}, {background: true, name: "idx_startObjectId"}); db.entityRelations.createIndex({endObjectId: -1}, {background: true, name: "idx_endObjectId"}); db.getCollection("entityRelations").createIndex({"relation.crosswalks.value": 1}, {background: true, name: "idx_crosswalks_asc"});Export 
Reltio DataTODOImport the Reltio Data to Mongo:Check the following required variables in the mdm-reltio-handler-env/inventory/prod/group_vars/mongo/all.ymlGBL PROD Example:mongo_install_dir: /app/mongohub_db_reltio_user: "mdm_hub"hub_db_reltio_●●●●●●●●●●●●● secret_hub_db_reltio_●●●●●●●●●●●●hub_db_admin_user: adminhub_db_admin_●●●●●●●●●●●●● secret_hub_db_admin_●●●●●●●●●●●●hub_db_name: reltio#COMPENSATION EVENTS VARIABLES:MONGO_URL: "10.12.199.141:27017"reltio_entities_export_url_name: "https://reltio-data-exports.s3.amazonaws.com/entities/pfe_mdm_api/2019/25-Feb-2019/fw2ztf8k3jpdffl_14-21_entities_bbf5.zip..."reltio_entities_export_file_name: "fw2ztf8k3jpdffl_14-21_entities_bbf5" # THE SAME AS FILE NAME FROM URLreltio_entities_export_date_timestamp_ms: "1551052800000" # RETIO EXPORT DATEreltio_entities_export_LAST_date_timestamp_ms: "1548288000000" # RETIO LAST EXPORT DATE. Do not SET when you want to do the reconciliation on all entitiesreltio_relations_export_url_name: "https://reltio-data-exports.s3.amazonaws.com/relations/pfe_mdm_api/2019/25-Feb-2019/fw2ztf8k3jpdffl_14-21_relations_afa6.zip..."reltio_relations_export_file_name: "fw2ztf8k3jpdffl_14-21_relations_afa6" # THE SAME AS FILE NAME FROM URLreltio_relations_export_date_timestamp_ms: "1551052800000" # RETIO EXPORT DATEreltio_relations_export_LAST_date_timestamp_ms: "1548806400000" # RETIO LAST EXPORT DATE. 
Do not SET when you want to do the reconciliation on all entitiesKAFKA_BOOTSTRAP_SERVERS: "10.192.70.189:9094,10.192.70.156:9094,10.192.70.159:9094"kafka_import_events_user: "hub_prod"kafka_import_events_●●●●●●●●●●●●● secret_kafka_import_events_●●●●●●●●●●●●kafka_import_events_truststore_●●●●●●●●●●●●● secret_kafka_import_events_truststore_●●●●●●●●●●●●internal_reltio_events_topic: "prod-internal-reltio-events"internal_reltio_relations_topic: "prod-internal-reltio-relations-events"reconciliate_entities: True # set To False when you want to do the reconciliation only for relationsreconciliate_relations: True #set To False when you want to do the reconciliation only for entitiesFor US PROD Set additional parameters:external_user_id: 25084803external_group_id: 20796763On the new files set only reltio_entities_export_.*  or reltio_relations_export_.* variables. According to the export date time and file name.check PRIMARYCheck which Mongo instance is PRIMARY. If the first instance is primary execute ansbile playbooks with --limit mongo1 parameter. 
Otherwise change the --limit attribute to the other nodeExecute: ansible-playbook extract_reltio_data.yml -i inventory/prod/inventory --limit mongo1 --vault-password-file=ansible.secretCheck logs Execute: docker logs --tail 1000 mongo_mongoimport_<date> -fWait until the container stops, then go to the next step.Create indexes on the imported collections: db.getCollection("fw2ztf8k3jpdffl_15-55_entities_9d83").createIndex({uri: -1}, {background: true, name: "idx_uri"}); db.getCollection("fw2ztf8k3jpdffl_15-55_entities_9d83").createIndex({type: -1}, {background: true, name: "idx_type"}); db.getCollection("fw2ztf8k3jpdffl_15-55_entities_9d83").createIndex({createdTime: -1}, {background: true, name: "idx_createdTime"}); db.getCollection("fw2ztf8k3jpdffl_15-55_entities_9d83").createIndex({updatedTime: -1}, {background: true, name: "idx_updatedTime"}); db.getCollection("fw2ztf8k3jpdffl_15-55_entities_9d83").createIndex({"attributes.Country.lookupCode": -1}, {background: true, name: "idx_country"}); db.getCollection("fw2ztf8k3jpdffl_15-55_entities_9d83").createIndex({"crosswalks.value": -1}, {background: true, name: "idx_crosswalks"}); db.getCollection("fw2ztf8k3jpdffl_14-21_relations_afa6").createIndex({uri: -1}, {background: true, name: "idx_uri"}); db.getCollection("fw2ztf8k3jpdffl_14-21_relations_afa6").createIndex({updatedTime: -1}, {background: true, name: "idx_updatedTime"}); db.getCollection("fw2ztf8k3jpdffl_14-21_relations_afa6").createIndex({"crosswalks.value": -1}, {background: true, name: "idx_crosswalks"});  Wait until the indexes are builtExecute:docker logs --tail 1000 mongo_mongo_1 -fBased on the imported Reltio data generate the missing events:Execute: ansible-playbook generate_compensation_events.yml -i inventory/prod/inventory --limit mongo1 --vault-password-file=ansible.secretWait until the docker containers stop. ETA: 1h - 1h 30minCheck the docker logsVerify the .*_compensation_result collections. 
Check the number of events for each type for entities: HCP_CREATED | HCO_CREATED | HCP_CHANGED | HCO_CHANGED | HCP_MERGED | HCO_MERGED | HCP_LOST_MERGE | HCO_LOST_MERGE | HCP_REMOVED | HCO_REMOVEDCheck the number of events for each type for relations: RELATIONSHIP_CREATED | RELATIONSHIP_CHANGED | RELATIONSHIP_MERGED | RELATIONSHIP_LOST_MERGE | RELATIONSHIP_REMOVEDCheck that the counts do not contain anomalies. Investigate any anomaly found. Check the logs in /app/mongo/compensation_events/scripts_entities/.*.out. If the logs contain "REPORT AN ERROR TO Reltio" - analyse the problem and report the issue to Reltio. Check the logs in /app/mongo/compensation_events/scripts_relations/.*.out. If the logs contain "REPORT AN ERROR TO Reltio" - analyse the problem and report the issue to Reltio. When all the events are correct, generate the events to the Kafka internal topic: Execute: ansible-playbook generate_compensation_events_kafka.yml -i inventory/prod/inventory --limit mongo1 --vault-password-file=ansible.secretVerify the internal Kafka topics and docker logs. "
},
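The verification step above tallies the generated compensation events per type and looks for anomalies. A small sketch of that tally, assuming the events are read as dicts with a 'type' field from a *_compensation_result collection (the helper name and input shape are illustrative, not the documented tooling):

```python
from collections import Counter

# Entity event types the procedure above expects to see.
ENTITY_EVENT_TYPES = {
    "HCP_CREATED", "HCO_CREATED", "HCP_CHANGED", "HCO_CHANGED",
    "HCP_MERGED", "HCO_MERGED", "HCP_LOST_MERGE", "HCO_LOST_MERGE",
    "HCP_REMOVED", "HCO_REMOVED",
}

def tally_events(events, expected_types=ENTITY_EVENT_TYPES):
    """Count events per type and flag unexpected types.

    Returns (counts, unexpected): counts is a Counter keyed by event
    type, unexpected is the set of types not in expected_types, which
    would indicate an anomaly worth investigating before producing
    the events to the Kafka internal topic.
    """
    counts = Counter(e["type"] for e in events)
    unexpected = set(counts) - set(expected_types)
    return counts, unexpected
```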
{
"title": "Hub Reconciliation Monitoring",
"pageID": "273707408",
"pageLink": "/display/GMDM/Hub+Reconciliation+Monitoring",
"content": "Check Reconciliation dashboardCheck reconciliation dashboard for every environmento on every monday. Ensure that set timespan corresponds with time of last reconciliation(friday-sunday):UrlsEMEA PROD Reconciliation dashboardGBL PROD Reconciliation dashboardAMER PROD Reconciliation dashboardGBLUS PROD Reconciliation dashboardAPAC PROD Reconciliation dashboardSTART -  the number of entities/relations/mergeTree that the reconciliation started forEND -  the number of entities/relations/mergeTree that were fully processed(Calculated checksum and checksum from Reltio export differ)REJECTED - to check the number of entities/relations/mergeTree that were rejected(Calculated checksum and checksum from Reltio export are the same)IssuesENTITIES/RELATION/MERGETREE START/REJECTED/END == 0 → Check reconciliation topics if there were produced and consumed events during last weekend → Check airflow dagsENTITIES/RELATION/MERGETREE END > 50k → Check HUB EVENTS dashboard→ Check snowflakeCheck HUB EVENTS dashboardHUB events dashboard describes events that were processed by event publisher and sent to output topics(clients/snowflake)UrlsEMEA PROD: https://kibana-emea-prod-gbl-mdm-hub.COMPANY.com/app/dashboards#/view/emea-prod-hub-events-dashboard?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-4d,to:now))GBL PROD: https://kibana-emea-prod-gbl-mdm-hub.COMPANY.com/app/dashboards#/view/gbl-prod-hub-events-dashboard?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-4d,to:now))AMER PROD: https://kibana-amer-prod-gbl-mdm-hub.COMPANY.com/app/dashboards#/view/amer-prod-hub-events-dashboard?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-4d,to:now))GBLUS PROD: https://kibana-amer-prod-gbl-mdm-hub.COMPANY.com/app/dashboards#/view/gblus-prod-hub-events-dashboard?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-4d,to:now))APAC PROD: 
https://kibana-apac-prod-gbl-mdm-hub.COMPANY.com/app/dashboards#/view/apac-prod-hub-events-dashboard?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-4d,to:now))Applied filter in the Kibana dashboardmetadata.HUB_RECONCILIATION: trueApplying the above filter we receive all reconciliation events that were processed by our streaming channel. Now we need to analyze two cases:comment field == 'No change in data detected (Entity MD5 checksum did not change), ignoring.'Although these events' checksums differed during the reconciliation calculation, after recalculating the checksum in entity-enricher the events were found to be the same. In that case we should check the Reltio exportcomment field != 'No change in data detected (Entity MD5 checksum did not change), ignoring.'This situation means that those events are really different and needed to be reconciled. For these entities/relations we send an update event to the snowflake topic. That's the standard process, but the number of such events shouldn't be too big. If it exceeds 50k then we should analyse what has changed in snowflake (Check snowflake) and check if everything is appropriate.Please check events for 5 HCPs, 5 HCOs and 5 relations from different time periods. E.g., the first hour of reconciliation, the middle of reconciliation and the last hour of reconciliation.Check reltio exportWe should download the Reltio export used during reconciliation from the s3 bucket. We can check the archive path in the hub_reconciliation_v2_* dags configuration:E.g.For AMER PROD: gblmdmhubprodamrasp101478/amer/prod/inbound/hub/hub_reconciliation/entities/archive/Check snowflakeWe should compare the last event to the previous one and see if there are any problems. We can use a similar query:\nselect * from landing.HUB_KAFKA_DATA where record_metadata:key='entities/GOyJxoA' ORDER BY record_metadata:CreateTime desc limit 10;\nIf there is only one record in the snowflake HUB_KAFKA_DATA this means that the retention time has passed and we do not have any data to compare to. 
In this case we can check the object in Reltio. Unfortunately Reltio doesn't keep all changes (e.g. rdm changes) so checking in Reltio doesn't always provide an explanation.Check object in reltioUnfortunately Reltio doesn't keep all changes (e.g. rdm changes) so checking in Reltio doesn't always explain what has changed. This solution should be used as a last resort.To compare objects in Reltio we need to perform Reltio api requests with the time parameter.The time parameter allows you to get the object in the state it was in at the selected timeSteps:Find the object in the Reltio UIFind the last update date Perform a Reltio api request without the time parameter\ncurl --location --request GET 'https://eu-360.reltio.com/reltio/api/Xy67R0nDA10RUV6/entities/PcepVgw?options=ovOnly' \\\n--header 'Authorization: Bearer 357b69a4-4709-43b8-95df-06ef9839599f'\nPerform a Reltio api request with the time parameter\ncurl --location --request GET 'https://eu-360.reltio.com/reltio/api/Xy67R0nDA10RUV6/entities/PcepVgw?options=ovOnly&time=1663064886000' \\\n--header 'Authorization: Bearer 357b69a4-4709-43b8-95df-06ef9839599f'\nCompare the resultsCheck reconciliation topicsCheck if new events showed up on the reconciliation topic on the last dag run and if those events were consumed:EMEA PROD: https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&from=now-7d&to=now&var-env=emea_prod&var-kube_env=emea_prod&var-topic=emea-prod-internal-reltio-reconciliation-events&var-instance=All&var-node=AMER PROD: https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&from=now-7d&to=now&var-env=amer_prod&var-kube_env=amer_prod&var-topic=amer-prod-internal-reltio-reconciliation-events&var-instance=All&var-node=GBL PROD: https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&from=now-7d&to=now&var-env=gbl_prod&var-kube_env=gbl_prod&var-topic=gbl-prod-internal-reltio-reconciliation-events&var-instance=All&var-node=APAC PROD: 
PROD: https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&from=now-7d&to=now&var-env=apac_prod&var-kube_env=apac_prod&var-topic=apac-prod-internal-reltio-reconciliation-events&var-instance=All&var-node=GBLUS PROD: https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&from=now-7d&to=now&var-env=gblus_prod&var-kube_env=gblus_prod&var-topic=gblus-prod-internal-reltio-reconciliation-events&var-instance=All&var-node=If there were no events generated during last weekend then please check airflow dags.If events were generated but not processed the please check mdmhub reconciliation service configuration.Check airflow dags If there is any issue please verify corresponding airflow dags. None of subsequent stages should be failed:https://airflow-amer-prod-gbl-mdm-hub.COMPANY.com/tree?dag_id=hub_reconciliation_v2_amer_prodhttps://airflow-amer-prod-gbl-mdm-hub.COMPANY.com/tree?dag_id=hub_reconciliation_v2_gblus_prodhttps://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/tree?dag_id=hub_reconciliation_v2_emea_prodhttps://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/tree?dag_id=hub_reconciliation_v2_gbl_prodhttps://airflow-apac-prod-gbl-mdm-hub.COMPANY.com/tree?dag_id=hub_reconciliation_v2_apac_prodRaport:Every reconciliation check should be finished with short raport posted on teams chatEnvEntities ENDRelation ENDMerges ENDSummmary(OK/NOK)CommentEMEA PRODGBL PRODAMER PRODGBLUS PRODAPAC PRODCheck Reconciliation dashboardCheck reconciliation dashboard for every environmento on every monday. 
Ensure that set timespan corresponds with time of last reconciliation(friday-sunday):UrlsEMEA PROD Reconciliation dashboardGBL PROD Reconciliation dashboardAMER PROD Reconciliation dashboardGBLUS PROD Reconciliation dashboardAPAC PROD Reconciliation dashboardSTART -  the number of entities/relations/mergeTree that the reconciliation started forEND -  the number of entities/relations/mergeTree that were fully processed(Calculated checksum and checksum from Reltio export differ)REJECTED - to check the number of entities/relations/mergeTree that were rejected(Calculated checksum and checksum from Reltio export are the same)IssuesENTITIES/RELATION/MERGETREE START/REJECTED/END == 0 → Check reconciliation topics if there were produced and consumed events during last weekend → Check airflow dagsENTITIES/RELATION/MERGETREE END > 50k → Check HUB EVENTS dashboard→ Check snowflakeCheck HUB EVENTS dashboardHUB events dashboard describes events that were processed by event publisher and sent to output topics(clients/snowflake)UrlsEMEA PROD: https://kibana-emea-prod-gbl-mdm-hub.COMPANY.com/app/dashboards#/view/emea-prod-hub-events-dashboard?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-4d,to:now))GBL PROD: https://kibana-emea-prod-gbl-mdm-hub.COMPANY.com/app/dashboards#/view/gbl-prod-hub-events-dashboard?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-4d,to:now))AMER PROD: https://kibana-amer-prod-gbl-mdm-hub.COMPANY.com/app/dashboards#/view/amer-prod-hub-events-dashboard?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-4d,to:now))GBLUS PROD: https://kibana-amer-prod-gbl-mdm-hub.COMPANY.com/app/dashboards#/view/gblus-prod-hub-events-dashboard?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-4d,to:now))APAC PROD: https://kibana-apac-prod-gbl-mdm-hub.COMPANY.com/app/dashboards#/view/apac-prod-hub-events-dashboard?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-4d,to:now))Aplied filter in 
kibana dashboardmetadata.HUB_RECONCILIATION: trueAppling above filter we receive all reconciliation events that were processed by our streaming channel. Now we need to analyze two cases:comment field == 'No change in data detected (Entity MD5 checksum did not change), ignoring.'Although these events checksums differed during reconciliation calculation, after recalculating checksum in entity-enricher, the events were found to be the same. In that case we should check reltio exportcomment field != 'No change in data detected (Entity MD5 checksum did not change), ignoring.'This situation means that those events are really different and needed to be reconciled. For these entities/relations we send update event to snowflake topic. That's standard process but number of such events shouldn't be to big. If it exceeds 50k then we should analyse what have changed in snowflake(Check snowflake) and check if everything is appropriate.Please check events 5 HCPs, 5 HCOs and 5 relations from different time periods. Eg, the first hour of reconciliation, the middle of reconciliation and the last hour of reconciliation.Check reltio exportWe should download Reltio export used during reconciliation from s3 bucket. We can check archivisation path in hub_reconciliation_v2_* dags configuration:E.g.For AMER PROD: gblmdmhubprodamrasp101478/amer/prod/inbound/hub/hub_reconciliation/entities/archive/Check snowflakeWe should compare the last event to the previous one and see if there are any problems. We can use similar query:\nselect * from landing.HUB_KAFKA_DATA where record_metadata:key='entities/GOyJxoA' ORDER BY record_metadata:CreateTime desc limit 10;\nIf there is only one rekord in snowflake HUB_KAFKA_DATA this means that retention time has passed and we do not have data any data to compare to. In this case we can check object in reltio. Unfortunately Reltio doesn't keep all changes(eg. 
rdm changes) so checking in reltio doesn't always provide explanation.Check object in reltioUnfortunately Reltio doesn't keep all changes(eg. rdm changes) so checking in reltio doesn't always provide explanation what has changed. This solution should be used as a last resort.To compare objects in reltio we need to performr Reltio api requests with time parameter.Time parameter allows you to get the object in the state it was in at selected timeSteps:Find object in Reltio UIFind last update date Perform Reltio api request without time parameter\ncurl --location --request GET 'https://eu-360.reltio.com/reltio/api/Xy67R0nDA10RUV6/entities/PcepVgw?options=ovOnly' \\\n--header 'Authorization: Bearer 357b69a4-4709-43b8-95df-06ef9839599f'\nPerform Reltio api request with time parameter\ncurl --location --request GET 'https://eu-360.reltio.com/reltio/api/Xy67R0nDA10RUV6/entities/PcepVgw?options=ovOnly&time=1663064886000' \\\n--header 'Authorization: Bearer 357b69a4-4709-43b8-95df-06ef9839599f'\nCompare resultsCheck reconciliations topicsCheck if new events showed up on reconciliation topic on last dag run and if those events were consumed:EMEA PROD: https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&from=now-7d&to=now&var-env=emea_prod&var-kube_env=emea_prod&var-topic=emea-prod-internal-reltio-reconciliation-events&var-instance=All&var-node=AMER PROD: https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&from=now-7d&to=now&var-env=amer_prod&var-kube_env=amer_prod&var-topic=amer-prod-internal-reltio-reconciliation-events&var-instance=All&var-node=GBL PROD: https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&from=now-7d&to=now&var-env=gbl_prod&var-kube_env=gbl_prod&var-topic=gbl-prod-internal-reltio-reconciliation-events&var-instance=All&var-node=APAC PROD: 
https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&from=now-7d&to=now&var-env=apac_prod&var-kube_env=apac_prod&var-topic=apac-prod-internal-reltio-reconciliation-events&var-instance=All&var-node=GBLUS PROD: https://mdm-monitoring.COMPANY.com/grafana/d/h5IgYmemk/kafka-topics-overview?orgId=1&refresh=30s&from=now-7d&to=now&var-env=gblus_prod&var-kube_env=gblus_prod&var-topic=gblus-prod-internal-reltio-reconciliation-events&var-instance=All&var-node=If there were no events generated during last weekend then please check airflow dags.If events were generated but not processed the please check mdmhub reconciliation service configuration.Check airflow dags If there is any issue please verify corresponding airflow dags. None of subsequent stages should be failed:https://airflow-amer-prod-gbl-mdm-hub.COMPANY.com/tree?dag_id=hub_reconciliation_v2_amer_prodhttps://airflow-amer-prod-gbl-mdm-hub.COMPANY.com/tree?dag_id=hub_reconciliation_v2_gblus_prodhttps://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/tree?dag_id=hub_reconciliation_v2_emea_prodhttps://airflow-emea-prod-gbl-mdm-hub.COMPANY.com/tree?dag_id=hub_reconciliation_v2_gbl_prodhttps://airflow-apac-prod-gbl-mdm-hub.COMPANY.com/tree?dag_id=hub_reconciliation_v2_apac_prodRaport:Every reconciliation check should be finished with short raport posted on teams chatEnvEntities ENDRelation ENDMerges ENDSummmary(OK/NOK)CommentEMEA PRODGBL PRODAMER PRODGBLUS PRODAPAC PROD"
},
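The decision rules in the monitoring SOP above (all counters at zero means the topics and dags need checking; END above 50k means the HUB events dashboard and snowflake need checking) can be sketched as a tiny classifier. The function name and return labels are illustrative assumptions; only the thresholds come from the SOP text:

```python
def reconciliation_status(start, rejected, end, end_threshold=50_000):
    """Classify a reconciliation run per the monitoring rules.

    START/REJECTED/END all zero -> no events were produced at all,
    so the reconciliation topics and airflow dags need checking.
    END above the 50k threshold -> too many objects changed, so the
    HUB EVENTS dashboard and snowflake need checking.  Otherwise OK.
    """
    if start == 0 and rejected == 0 and end == 0:
        return "CHECK_TOPICS_AND_DAGS"
    if end > end_threshold:
        return "CHECK_EVENTS_AND_SNOWFLAKE"
    return "OK"
```

This could feed the per-environment OK/NOK column of the weekly Teams report.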
{
"title": "Verifying Reconciliation Results",
"pageID": "164470187",
"pageLink": "/display/GMDM/Verifying+Reconciliation+Results",
"content": "Run reconciliation dag in airflow for given entities, relations, merge-treeGBLUS DEV - http://euw1z1dl039.COMPANY.com:8080/airflow/tree?dag_id=hub_reconciliation_v2_gblus_devGBLUS QA - http://euw1z1dl039.COMPANY.com:8080/airflow/tree?dag_id=hub_reconciliation_v2_gblus_qaGBLUS STAGE - http://euw1z1dl039.COMPANY.com:8080/airflow/tree?dag_id=hub_reconciliation_v2_gblus_stageAfter reconciliation is finished go to kibana to make verification (https://mdm-log-management-gbl-us-nonprod.COMPANY.com:5601/app/kibana#)Go to Discover dashboard and choose from dropdown list appropriate filter: docker.<env>switch to Lucenechoose the correct time rangechoose the correct index docker.<env>Add following custom filters  tag is depending on environment, it can bedocker.dev.mdm-hub-reconciliation-servicedocker.qa.mdm-hub-reconciliation-servicedocker.stage.mdm-hub-reconciliation-servicedocker.prod.mdm-hub-reconciliation-servicedata.logger_name, choose if you want to check reconciliation type:com.COMPANY.mdm.reconciliation.stream.ReconciliationMergeLogic for mergeTree com.COMPANY.mdm.reconciliation.stream.ReconciliationLogic - for entities/relationsTo check only entities in the search box write entities  to select only one object type (using LUCENE type)To check only relations in the search box write relation  to select only one object type (using LUCENE type)data.message is START - to check the number of entities/relations/mergeTree that the reconciliation started fordata.message is END - to check the number of entities/relations/mergeTree that were fully processeddata.message is REJECTED - to check the number of entities/relations/mergeTree that were rejectedchoose the appropriate time of reconciliation processingDifferences verification between export and mongofind URI of the object to verify in kibanacheck the Event Publisher dashboard for this uri, if the Reconciliation process detected this as a difference (END) and in the Publisher dasbhaord there is a comment "No 
change in data detected (Entity MD5 checksum did not change), ignoring." it means something is wrong and you can compare the Reltio export entity with Mongo Entity.download export from S3 (us/<env>/inbound/hub/hub_reconciliation/<object_type>/archive)find the JSON in the part_ files - "zgrep "entities/<id>" part-00*"save the JSON to the file that will be passed to the calculateChecksum.groovy script - file format:[json,json]process exported object using calculateChecksum.groovy from docker and save the objectModify the script:add  EntityKt filteredEntity = EntityFilter.filter to the reconciliation event output so you can check the whole JSON in the output filechange to the outfile.append(uri + "|" + newLine + "\\n")check the file for reference and use this calculateChecksum.groovyScript RUN:Run with the following parameters: D:\\docs\\EMEA\\Reconciliation_PROCESS\\entities\\part_01020222.txt entities FULL COMPANYCustID 1 https://api-emea-prod-gbl-mdm-hub.COMPANY.com:8443/prod/gw bhWpathentities/relations/merge_treeFULL - to get full JSON compare MD5this is from the DAG config - hub_reconciliation_v2.yml.params.nonOvAttrToIncludemanager URLmanager API KEYOutput file is in the - D:\\opt\\kafka_utils\\dataexport object with the same uri from mongo db using simple json formatcompare those two export using some compare tool, but before reformat those jsonsUse Intellij compare two JSON files function"
},
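The verification above hinges on comparing an MD5 checksum of the Reltio export entity against the one computed for the Mongo entity. The real logic lives in calculateChecksum.groovy; the Python sketch below only illustrates the idea under stated assumptions (which volatile fields are excluded, and that the checksum is MD5 over a sorted-key, whitespace-free JSON serialization - neither is confirmed by the SOP):

```python
import hashlib
import json

def entity_checksum(entity, exclude_keys=("updatedTime", "createdTime")):
    """MD5 over a canonical JSON form of an entity dict.

    Drops the assumed volatile fields, serializes with sorted keys
    and no whitespace, then hashes, so two dumps of the same entity
    (export vs Mongo) yield the same digest regardless of key order.
    """
    filtered = {k: v for k, v in entity.items() if k not in exclude_keys}
    canonical = json.dumps(filtered, sort_keys=True, separators=(",", ":"))
    return hashlib.md5(canonical.encode("utf-8")).hexdigest()
```

Two objects that differ only in the excluded fields or in key order produce identical digests; any other difference changes the digest and marks the object for manual comparison.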
{
"title": "Snowflake:",
"pageID": "337856693",
"pageLink": "/pages/viewpage.action?pageId=337856693",
"content": ""
},
{
"title": "How to fix issue in Reltio Parser with lookup typos",
"pageID": "337858475",
"pageLink": "/display/GMDM/How+to+fix+issue+in+Reltio+Parser+with+lookup+typos",
"content": "This procedure shows how to manage typos in lookup codes that can resolve to the same alias in Snowflake, producing errors in Reltio Configuration ParserGo to ReltioConfigurations  collection in MongoDBFind configurations with typo that you want to fix (one by one or with filters)Using Edit Document option, open each affected configuration and find attribute with wrong lookupCodeFix typos and save changesExample with screenshotsIn this example we fix added white symbol at the end of "DCRType" lookup code on APAC DEV. We go to this environment:Find our configurations:Check them for possible typo:Fix it in each affected configuration and save. This ensures that next parsing will be successfull."
},
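The failure mode the SOP above fixes is two lookup codes that collapse to the same Snowflake alias (e.g. a trailing whitespace character after "DCRType"). A small sketch for spotting such collisions in a list of lookup codes pulled from the ReltioConfigurations collection; the normalization (trim plus lower-case) is an assumption about how the alias is derived, not documented behaviour:

```python
from collections import defaultdict

def find_alias_collisions(lookup_codes):
    """Group lookup codes that normalize to the same alias.

    Assumes aliasing trims surrounding whitespace and ignores case,
    so "DCRType" and "DCRType " collide.  Returns only the groups
    with more than one distinct raw code - these are the typos to
    fix in the affected configurations.
    """
    groups = defaultdict(set)
    for code in lookup_codes:
        groups[code.strip().lower()].add(code)
    return {alias: codes for alias, codes in groups.items() if len(codes) > 1}
```

Running this over the lookup codes of an environment before a parser run would surface the typos that the manual procedure then fixes one by one.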
{
"title": "SSL Certificates:",
"pageID": "218453496",
"pageLink": "/pages/viewpage.action?pageId=218453496",
"content": ""
},
{
"title": "Generating a CSR",
"pageID": "218454469",
"pageLink": "/display/GMDM/Generating+a+CSR",
"content": "Go to the configuration repository (mdm-hub-env-config).Find the expiring certificate.KongFor KONG / KAFKA FLEX PROD mdm-hub-env-config/ssl_certs/prod_us/certs/mdm-ihub-us-trade-prod.COMPANY.com.key Certificate should be in ssl_certs/{{ env }}/certs/{{ url }}.pemFor example: ssl_certs/prod/certs/mdm-gateway.COMPANY.com.pemWe will generate our new certificate from the existing private key. Private key is in the same directory as certificate, ending with .key extension.Copy it to some temporary directory and decrypt:\nanuskp@CF-341562:/mnt/c/Users/panu/gitrep/mdm-hub-env-config/ssl_certs/prod/certs$ ls -l\ntotal 32\n-rwxrwxrwx 1 anuskp anuskp 7353 Nov 12 11:59 mdm-gateway.COMPANY.com.key\n-rwxrwxrwx 1 anuskp anuskp 24459 Jan 28 15:05 mdm-gateway.COMPANY.com.pem\nanuskp@CF-341562:/mnt/c/Users/panu/gitrep/mdm-hub-env-config/ssl_certs/prod/certs$ cp mdm-gateway.COMPANY.com.key ~/temp\nanuskp@CF-341562:/mnt/c/Users/panu/gitrep/mdm-hub-env-config/ssl_certs/prod/certs$ cd ~/temp\nanuskp@CF-341562:~/temp$ ansible-vault decrypt ./mdm-gateway.COMPANY.com.key --vault-password-file=~/ap\nDecryption successful\nContents of this file are confidential. Do not share it with anyone outside of your Team.Generate a CSR from the private key:CSR Value GuidlinesDuring last Certificate request we received below CSR guidlines:Common Name: Needs to have FQDNOrganizational Unit: No specific requirement -  optional attribute.Organization: COMPANY, Inc                NOT  COMPANY [OR]  COMPANY Inc  [OR] COMPANY Inc.Locality: City or Location must be spelled correctly. No abbreviations allowedState: Must use full name of State or Province, no abbreviations allowedCountry: US (Always use 2 char. 
Country code)Key Size: at least 2048 is recommended.\nanuskp@CF-341562:~/temp$ openssl req -new -key mdm-gateway.COMPANY.com.key -out mdm-gateway.COMPANY.com.csr\nYou are about to be asked to enter information that will be incorporated\ninto your certificate request.\nWhat you are about to enter is what is called a Distinguished Name or a DN.\nThere are quite a few fields but you can leave some blank\nFor some fields there will be a default value,\nIf you enter '.', the field will be left blank.\n-----\nCountry Name (2 letter code) [AU]:US\nState or Province Name (full name) [Some-State]:Connecticut\nLocality Name (eg, city) []:Groton\nOrganization Name (eg, company) [Internet Widgits Pty Ltd]:COMPANY, Inc\nOrganizational Unit Name (eg, section) []:\nCommon Name (e.g. server FQDN or YOUR name) []:mdm-gateway-int.COMPANY.com\nEmail Address []:DL-ATP_MDMHUB_SUPPORT_PROD@COMPANY.com\n\nPlease enter the following 'extra' attributes\nto be sent with your certificate request\nA challenge password []:\nAn optional company name []:\nanuskp@CF-341562:~/temp$ ls -l\ntotal 16\n-rw-r--r-- 1 anuskp anuskp 1098 Feb 10 15:58 mdm-gateway.COMPANY.com.csr\n-rw------- 1 anuskp anuskp 1734 Feb 10 15:52 mdm-gateway.COMPANY.com.key\nAll information provided should be exactly the same as existing certificate's. 
Email should be set to the support DL:Kafka - existing guideKeystores/Truststores should be in ssl_certs/{{ env }}/ssl/server.keystore.jksFor example: ssl_certs/prod/ssl/server.keystore.jksGo to some temporary directory and generate a new keystore:\nanuskp@CF-341562:~/temp$ keytool -genkeypair -alias kafka.mdm-gateway.COMPANY.com -keyalg RSA -keysize 2048 -keystore server.keystore.jks -dname "CN = kafka.mdm-gateway.COMPANY.com, O = COMPANY"\nEnter keystore ●●●●●●●●●●●●●●●●●● new password:\nEnter key password for <kafka.mdm-gateway.COMPANY.com>\n (RETURN if same as keystore password):\n\nWarning:\nThe JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore server.keystore.jks -destkeystore server.keystore.jks -deststoretype pkcs12".\nThe key password should be the same as the keystore password. After the certificate has been switched, remember to save the new keystore password in inventory/{{ env }}/group_vars/kafka/secret.yml.In the -dname param insert the same parameters as in the existing certificate.Generate a CSR from the keystore:\nanuskp@CF-341562:~/temp$ keytool -certreq -alias kafka.mdm-gateway.COMPANY.com -file kafka.mdm-gateway.COMPANY.com.csr -keystore server.keystore.jks\nEnter keystore ●●●●●●●●●●●●●●●●●●●\nThe JKS keystore uses a proprietary format. 
It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore server.keystore.jks -destkeystore server.keystore.jks -deststoretype pkcs12".\nanuskp@CF-341562:~/temp$ ls -l\ntotal 8\n-rw-r--r-- 1 anuskp anuskp 1027 Feb 10 16:11 kafka.mdm-gateway.COMPANY.com.csr\n-rw-r--r-- 1 anuskp anuskp 2161 Feb 10 16:07 server.keystore.jks\nEFKEvery Elasticsearch node may have its own certificate:ssl_certs/prod/efk/esnode1/mdm-esnode1-gbl-trade-prod.COMPANY.com.cerssl_certs/prod/efk/esnode2/mdm-esnode2-gbl-trade-prod.COMPANY.com.cerssl_certs/prod/efk/esnode3/mdm-esnode3-gbl-trade-prod.COMPANY.com.cerThere is only one certificate for Kibana:ssl_certs/prod/efk/kibana/mdm-log-management-gbl-trade-prod.COMPANY.com.cerGenerating CSRs from existing .key files is exactly the same as for Kong. Remember to set parameters ("O", "L", "CN") exactly the same as existing certificate's."
},
{
"title": "Requesting a new certificate",
"pageID": "218454527",
"pageLink": "/display/GMDM/Requesting+a+new+certificate",
"content": "Go to https://requestmanager.COMPANY.com/. Search for Digital Certificates and click the first and only position found:COMPANY-issued certificatesCheck the COMPANY SSL Certificate - Internal Only checkbox.Copy-paste your CSR (How to generate CSR?) into the first window.Into the second window, copy-paste Subject Alternative Names from existing certificate:Put support DL (PROD support DL for Production certificates) and your own email in the third windowSet Would you like to submit an additional SSL Cert request? - NoClick Submit and wait for an email with the certificateEntrust-issued certificatesCheck the Entrust External SSL certificate checkbox and click the first link:You will be redirected to the Entrust portal. Check if renewing an existing certificate works. If it doesn't, follow below steps:Check Request a new certificate (SSL/TLS) and Submit:Choose Multi-Domain OVCopy-paste your CSR into the text field on the right. Make sure all details are correct.List Subject Alternative Names from existing certificate:Skip the OPTIONS page (it should be empty).On last page, fill in Project Owner's details. TRIPLE-CHECK EVERYTHING and click Submit.IMPORTANT: COMPANY email server filters out EXTERNAL emails sent to DLs. Do not put only your DL in the Additional Emails field.Wait for the email with new certificate from Entrust."
},
{
"title": "Rotating EFK certificates",
"pageID": "218454407",
"pageLink": "/display/GMDM/Rotating+EFK+certificates",
"content": "ElasticsearchSingle instance (non-prod clusters)Go to Elasticsearch config directory on host. For example:/app/efk/elasticsearch/config - US DEV (amraelp00005781.COMPANY.com)/apps/efk/elasticsearch/config - GBL DEV (euw1z1dl039.COMPANY.com)\n[mdm@euw1z1dl039 config]$ ls -l\ntotal 48\n-rw-rw-r-- 1 mdm 7000 1445 Feb 22 2019 admin-ca.pem\n-rw------- 1 mdm docker 1708 Jul 27 2020 elasticsearch-admin-key.pem\n-rw------- 1 mdm docker 1765 Jul 27 2020 elasticsearch-admin.pem\n-rw-rw---- 1 mdm docker 199 Mar 30 2020 elasticsearch.keystore\n-rw------- 1 mdm docker 1013 Jul 27 2020 elasticsearch.yml\n-rw------- 1 mdm docker 1704 Jul 27 2020 esnode-key.pem\n-rw------- 1 mdm docker 1801 Feb 9 05:00 esnode.pem\n-rw------- 1 mdm docker 3320 Mar 30 2020 jvm.options\n-rw------- 1 mdm docker 10899 Mar 30 2020 log4j2.properties\n-rw------- 1 mdm docker 1972 Jul 27 2020 root-ca.pem\nCheck the elasticsearch.yml config file. By default, esnode.pem should contain the certificate and esnode-key.pem should contain private key.If you have generated new CSR based on existing private key, you only need to update the esnode.pem file:\n[mdm@euw1z1dl039 config]$ vi esnode.pem\nRemove all file contents and copy-paste the new certificate. Save the changes.Now restart the container and make sure it's working and not throwing errors in the logs:\n[mdm@euw1z1dl039 config]$ docker restart elasticsearch\nelasticsearch\n[mdm@euw1z1dl039 config]$ docker logs --tail 100 -f elasticsearch\nLog into Kibana and check that dashboards are correctly displaying data.Clustered (production clusters)On every Elasticsearch node go to the Elasticsearch config directory and replace esnode.pem certificate file, as shown in 1a.Once done, restart all Elasticsearch instances. Check logs. 
All instances should throw the following error in logs:\n[2022-02-10T10:53:19,770][ERROR][c.f.s.a.BackendRegistry ] [prod-gbl-data-2] Not yet initialized (you may need to run sgadmin)\n[2022-02-10T10:53:19,798][ERROR][c.f.s.a.BackendRegistry ] [prod-gbl-data-2] Not yet initialized (you may need to run sgadmin)\nNow, run the following command on all hosts in Elasticsearch cluster:\ndocker exec elasticsearch bash -c "export JAVA_HOME=/usr/share/elasticsearch/jdk/ && cd /usr/share/elasticsearch/plugins/search-guard-7/tools && ./sgadmin.sh -cd ../sgconfig/ -h {{ elasticsearch_cluster_network_host }} -cn {{ elasticsearch_cluster_name }} -nhnv -cacert ../../../config/root-ca.pem -cert ../../../config/elasticsearch-admin.pem -key ../../../config/elasticsearch-admin-key.pem"\nwhere:{{ elasticsearch_cluster_network_host }} - instance's name in cluster, check in host_vars, for example (in configuration repository): mdm-hub-env-config/inventory/prod/host_vars/efk1/all.yml{{ elasticsearch_cluster_name }} - cluster name, is the same for all nodes, check in group_vars, for example: mdm-hub-env-config/inventory/prod/group_vars/efk-services/all.ymlSo, on example of GLOBAL PROD (2 clusters):Run the following on PROD4 (euw1z1pl025.COMPANY.com):\n[mdm@euw1z1pl025 config]$ docker exec elasticsearch bash -c "export JAVA_HOME=/usr/share/elasticsearch/jdk/ && cd /usr/share/elasticsearch/plugins/search-guard-7/tools && ./sgadmin.sh -cd ../sgconfig/ -h 'euw1z1pl025.COMPANY.com' -cn 'elasticsearch-prod-gbl-cluster' -nhnv -cacert ../../../config/root-ca.pem -cert ../../../config/elasticsearch-admin.pem -key ../../../config/elasticsearch-admin-key.pem"\nSearch Guard Admin v7\nWill connect to euw1z1pl025.COMPANY.com:9300 ... 
done\nConnected as CN=elasticsearch-admin.COMPANY.com,O=COMPANY\nElasticsearch Version: 7.6.2\nSearch Guard Version: 7.6.2-41.0.0\nContacting elasticsearch cluster 'elasticsearch-prod-gbl-cluster' and wait for YELLOW clusterstate ...\nClustername: elasticsearch-prod-gbl-cluster\nClusterstate: YELLOW\nNumber of nodes: 2\nNumber of data nodes: 2\nsearchguard index already exists, so we do not need to create one.\nINFO: searchguard index state is YELLOW, it seems you miss some replicas\nPopulate config from /usr/share/elasticsearch/plugins/search-guard-7/sgconfig\n../sgconfig/sg_action_groups.yml OK\n../sgconfig/sg_internal_users.yml OK\n../sgconfig/sg_roles.yml OK\n../sgconfig/sg_roles_mapping.yml OK\n../sgconfig/sg_config.yml OK\n../sgconfig/sg_tenants.yml OK\nWill update '_doc/config' with ../sgconfig/sg_config.yml\n SUCC: Configuration for 'config' created or updated\nWill update '_doc/roles' with ../sgconfig/sg_roles.yml\n SUCC: Configuration for 'roles' created or updated\nWill update '_doc/rolesmapping' with ../sgconfig/sg_roles_mapping.yml\n SUCC: Configuration for 'rolesmapping' created or updated\nWill update '_doc/internalusers' with ../sgconfig/sg_internal_users.yml\n SUCC: Configuration for 'internalusers' created or updated\nWill update '_doc/actiongroups' with ../sgconfig/sg_action_groups.yml\n SUCC: Configuration for 'actiongroups' created or updated\nWill update '_doc/tenants' with ../sgconfig/sg_tenants.yml\n SUCC: Configuration for 'tenants' created or updated\nDone with success\nRun the following on PROD5 (euw1z2pl024.COMPANY.com):\n[mdm@euw1z2pl024 config]$ docker exec elasticsearch bash -c "export JAVA_HOME=/usr/share/elasticsearch/jdk/ && cd /usr/share/elasticsearch/plugins/search-guard-7/tools && ./sgadmin.sh -cd ../sgconfig/ -h 'euw1z2pl024.COMPANY.com' -cn 'elasticsearch-prod-gbl-cluster' -nhnv -cacert ../../../config/root-ca.pem -cert ../../../config/elasticsearch-admin.pem -key ../../../config/elasticsearch-admin-key.pem"\nSearch Guard 
Admin v7\nWill connect to euw1z2pl024.COMPANY.com:9300 ... done\nConnected as CN=elasticsearch-admin.COMPANY.com,O=COMPANY\nElasticsearch Version: 7.6.2\nSearch Guard Version: 7.6.2-41.0.0\nContacting elasticsearch cluster 'elasticsearch-prod-gbl-cluster' and wait for YELLOW clusterstate ...\nClustername: elasticsearch-prod-gbl-cluster\nClusterstate: YELLOW\nNumber of nodes: 2\nNumber of data nodes: 2\nsearchguard index already exists, so we do not need to create one.\nINFO: searchguard index state is YELLOW, it seems you miss some replicas\nPopulate config from /usr/share/elasticsearch/plugins/search-guard-7/sgconfig\n../sgconfig/sg_action_groups.yml OK\n../sgconfig/sg_internal_users.yml OK\n../sgconfig/sg_roles.yml OK\n../sgconfig/sg_roles_mapping.yml OK\n../sgconfig/sg_config.yml OK\n../sgconfig/sg_tenants.yml OK\nWill update '_doc/config' with ../sgconfig/sg_config.yml\n SUCC: Configuration for 'config' created or updated\nWill update '_doc/roles' with ../sgconfig/sg_roles.yml\n SUCC: Configuration for 'roles' created or updated\nWill update '_doc/rolesmapping' with ../sgconfig/sg_roles_mapping.yml\n SUCC: Configuration for 'rolesmapping' created or updated\nWill update '_doc/internalusers' with ../sgconfig/sg_internal_users.yml\n SUCC: Configuration for 'internalusers' created or updated\nWill update '_doc/actiongroups' with ../sgconfig/sg_action_groups.yml\n SUCC: Configuration for 'actiongroups' created or updated\nWill update '_doc/tenants' with ../sgconfig/sg_tenants.yml\n SUCC: Configuration for 'tenants' created or updated\nDone with success\nCheck the logs. There should be no new errors. Check Kibana - whether you can login and view data in dashboards.KibanaGo to Kibana config directory on host. 
For example:/app/efk/kibana/config\n[root@amraelp00005781 config]# ls -l\ntotal 12\n-rw-r--r-- 1 mdmihnpr mdmihub 1964 Jul 10 2020 kibana.crt\n-rw-r--r-- 1 mdmihnpr mdmihub 1704 Jul 10 2020 kibana.key\n-rw-rwxr-- 1 mdmihnpr mdmihub 536 Jul 5 2020 kibana.yml\nModify the kibana.crt file. Remove its contents and copy-paste new certificate.\n[root@amraelp00005781 config]# vi kibana.crt\nDo the same for kibana.key, unless you have generated the CSR based on the existing private key.Restart the Kibana container and check logs:\n[root@amraelp00005781 config]# docker restart kibana\nkibana\n[root@amraelp00005781 config]# docker logs --tail 100 -f kibana\nWait for Kibana to come back up and make sure there are no errors in logs and you can login to web app and view data in dashboards.REMEMBER TO PUSH NEW CERTIFICATES TO CONFIGURATION REPO"
},
{
"title": "Rotating FLEX Kafka certificates",
"pageID": "387161356",
"pageLink": "/display/GMDM/Rotating+FLEX+Kafka+certificates",
"content": "Kafka FLEX certificate is the same as for the Kong FLEX1 Email to Santosh.If there is a need to rotate Kafka certificate on FLEX environment, approval from the business is required.To: santosh.dube@COMPANY.comCc: dl-atp_mdmhub_support@COMPANY.comHi Santosh,We created the RFC ticket in our Jira - <Link to the ticket>The FLEX PROD Kafka certificate is expiring, we need to go through the deployment procedure and replace the certificate on our Kafka.We prepared the following deployment procedure '<doc> added to attachment.Could you please approve this request because we need to trigger this deployment to replace the certificates.Let me know in case of any questions.Regards,Change the certificate:2. Check if CA cert has changed!IMPORTANT! If intermediate certificate changed, it would be required to contact FLEX team to replace it. To: DL-CBK-MAST@COMPANY.com anisha.sahu@COMPANY.com santosh.dube@COMPANY.comDear FLEX team,We are providing new client.trustore.jks file which should be changed from your side. The change was forced by the change in policy of providing new certificates and server retirement. Due to the new certificate is signed by the other intermediate CA there is a need to change client truststore.Please treat this as a high priority as the certificate will expire in 2 days.Kind regards,Remember to attach new client.truststore.jks file!It is not required to create additional email thread with client if there is a need to change only the certificate. 3. Rotate certificate3.1 create keystoreCreate new keystore with new key-pair. 
The private key should be in the repository under mdm-hub-env-config/ssl_certs/prod_us/certs/mdm-ihub-us-trade-prod.COMPANY.com.key and the certificate should be requested.Tools → Import Key Pair → → PKCS #8 → → and then choose the private key and certificates from the directories in the repo.Passwords can be found under mdm-hub-env-config/inventory/prod_us/host_vars/kafka1/secret.yml3.2 Rotate certificates on machinesOnce done, log into the host and go to /app/kafka/ssl.Back the existing server.keystore.jks up:\n$ cp server.keystore.jks server.keystore.jks-backup\nAnd upload the modified server.keystore.jks.Restart the Kafka container and wait for it to come back up:\n$ docker restart kafka_kafka_1\nReplace the keystore and restart the Kafka container on each node.Wait for Kafka to come up and become fully operational before restarting the next node. After the certificate has been successfully rotated, push the modified keystore to the mdm-hub-env-config repository. The CER and CSR files are no longer useful and can be disposed of.Provide the evidence in the email thread:After the replacement, the evidence file should be sent:"
},
{
"title": "Rotating FLEX Kong certificates",
"pageID": "387161359",
"pageLink": "/display/GMDM/Rotating+FLEX+Kong+certificates",
"content": "Kafka certificate is the same as for the kongRotating FLEX Kong certificate.If there is a need to rotate Kafka certificate on FLEX environment, approval from the business is required.To: santosh.dube@COMPANY.comCc: dl-atp_mdmhub_support@COMPANY.comHi Santosh,We created the RFC ticket in our Jira - <Link to the ticket>The FLEX PROD Kong certificate is expiring, we need to go through the deployment procedure and replace the certificate on our Kong API gateway.We prepared the following deployment procedure '<doc> added to attachment.Could you please approve this request because we need to trigger this deployment to replace the certificates.Let me know in case of any questions.Regards,Change the certificate:!IMPORTANT! If intermediate certificate changed, it would be required to contact FLEX team to replace it. To: DL-CBK-MAST@COMPANY.com anisha.sahu@COMPANY.com santosh.dube@COMPANY.comDear FLEX team,We are providing new client.trustore.jks file which should be changed from your side. The change was forced by the change in policy of providing new certificates and server retirement. Due to the new certificate is signed by the other intermediate CA there is a need to change client truststore.Please treat this as a high priority as the certificate will expire in 2 days.Kind regards,Remember to attach new client.truststore.jks file!It is not required to create additional email thread with client if there is a need to change only the certificate. You should receive three certificates from COMPANY/Entrust: Server Certificate and Intermediate (PBACA G2) or Intermediate and Root. 
Open the Server Certificate in the text editor:Copy all received certificates into a chain in the following sequence:Server CertificateIntermediateRoot:Go to the main directory with a command line and ansible installedMake sure you are on the master branch and have the newest changes fetchedgit checkout mastergit pullComment out all sections in mdm-hub-env-config\\inventory\\prod_us\\group_vars\\kong\\all.yml except “kong_certificates”Comment out all sections in mdm-hub-env-config\\roles\\update_kong_api\\tasks\\main.yml except the “Add Certificates” partExecute the ansible playbook (limit it to only one Kong host in the cluster)$ ansible-playbook update_kong_api.yml -i inventory/prod_us/inventory --vault-password-file=/home/karol/password --limit kong1Verify that the server is responding with the correct certificate openssl s_client -connect mdm-ihub-us-trade-prod.COMPANY.com:443 </dev/nullopenssl s_client -connect amraelp00006207.COMPANY.com:8443 </dev/null          openssl s_client -connect amraelp00006208.COMPANY.com:8443 </dev/null          openssl s_client -connect amraelp00006209.COMPANY.com:8443 </dev/nullProvide the evidence in the email thread:After the replacement, the evidence file should be sent:"
},
{
"title": "Rotating Kafka certificates",
"pageID": "229180645",
"pageLink": "/display/GMDM/Rotating+Kafka+certificates",
"content": "After receiving signed SSL certificate, place it in the same mdm-hub-env-config repo directory as existing Kafka keystore. For example:ssl_certs/prod/ssl/[server.keystore.jks] - for Global PRODAdd the certificate to keystore, using the command:\n$ keytool -importcert -alias kafka.mdm-gateway.COMPANY.com -file kafka.mdm-gateway.COMPANY.com.cer -keystore server.keystore.jks\nImportant: use the same alias as existing certificate in this keystore, to overwrite itOnce done, log into host and go to /app/kafka/ssl.Back existing server.keystore.jks up:\n$ cp server.keystore.jks server.keystore.jks-backup\nAnd upload the modified server.keystore.jks.Restart Kafka container and wait for it to come back up:\n$ docker restart kafka_kafka_1\nIf there are multiple Kafka instances (Production), replace the keystore and restart Kafka container on each node. Wait for Kafka to come up and become fully operational before restarting next node. You can check node availability using, for example, AKHQ.After certificate has been successfully rotated, push modified keystore to the mdm-hub-env-config repository. CER and CSR files are no longer useful and can be disposed of."
},
{
"title": "Rotating Kong certificate",
"pageID": "218453498",
"pageLink": "/display/GMDM/Rotating+Kong+certificate",
"content": "You should receive three certificates from COMPANY/Entrust: Server Certificate and Intermediate (PBACA G2) or Intermediate and Root. Open the Server Certificate in the text editor:Copy all received certificates into a chain in the following sequence:Server CertificateIntermediateRoot:Save the file as {hostname}.pem - for example mdm-gateway.COMPANY.com.pem and switch it in configuration repository:mdm-hub-env-config/ssl_certs/prod/certs/*Go to appropriate Kong group_vars:mdm-hub-env-config/inventory/prod/group_vars/kong_v1/kong.ymlMake sure all "create_or_update" flags are set to "False":Go down to #CERTIFICATES and switch the "create_or_update" flag. Path to the .pem file should not have changed - if you chose a different filename, adjust it here:Run the update_kong_api_v1.yml playbook. Limit it to only one Kong host in the cluster. After it has finished, switch the "create_or_update" flag back to "False" and push new certificate to the repository.$ ansible-playbook update_kong_api_v1.yml -i inventory/prod/inventory --vault-password-file=~/ap --limit kong_v1_01Check all SNIs on all Kong instances using s_client:$ openssl s_client -servername mdm-gateway-int.COMPANY.com -connect euw1z1pl017.COMPANY.com:8443$ openssl s_client -servername mdm-gateway-int.COMPANY.com -connect euw1z1pl021.COMPANY.com:8443$ openssl s_client -servername mdm-gateway-int.COMPANY.com -connect euw1z1pl022.COMPANY.com:8443$ openssl s_client -servername mdm-gateway.COMPANY.com -connect euw1z1pl017.COMPANY.com:8443..."
},
{
"title": "Hub upgrade procedures and calendar",
"pageID": "401611801",
"pageLink": "/display/GMDM/Hub+upgrade+procedures+and+calendar",
"content": "Backend components upgrade policyMajor upgrade once a yearPatch upgrades every quarterUpgrade tableComponentcurrent versionlatest upgrade datenewest patch releaseplanned patch upgrade datenewest stable releaseplanned major upgrade dateNotesPrometheus2.53.4 (monitoring host)2025-04-10--2.53.4-\n MR-10396\n -\n Getting issue details...\n STATUS\n kube-prometheus-stack61.7.22025-05--70.1.0-\n MR-9578\n -\n Getting issue details...\n STATUS\n Airflow2.7.22023-112.7.3-2.10.52025 Q2\n MR-10437\n -\n Getting issue details...\n STATUS\n Monstache6.7.212025-05--6.7.21-\n MR-10437\n -\n Getting issue details...\n STATUS\n Kong Gateway3.4.22024-09--3.9.02025 Q3Kong Ingress Controller3.2.02024-093.2.4-3.4.42025 Q3Kong external proxy3.3.12023-10--3.9.02025 Q3OpenJDK - AdoptOpenJDK11.0.14.1_12022(?)11.0.27_62025 Q2Temurin 17.0.15+6-LTS2025 Q3Jenkins2.462.32024-10--2.504.12025 Q3All versions newer than 2.462.3 require Java 17Consul1.16.22023-111.16.6-1.21.02025 Q2\n MR-10437\n -\n Getting issue details...\n STATUS\n Elasticsearch8.11.42024-02--9.0.12025 Q4Fluentd1.16.52024-051.16.8-1.182025 Q4Replace with Fluent Bit instead?Fluent Bit2.2.32025-02--4.0.12025 Q4Apache Kafka3.7.02024-073.7.22025 Q24.0.02026 Q1AKHQ0.23.02024-08--0.25.12026 Q1MongoDB6.0.212025-04--8.0.82026 Q2\n MR-10399\n -\n Getting issue details...\n STATUS\n "
},
{
"title": "Airflow upgrade procedure",
"pageID": "401611840",
"pageLink": "/display/GMDM/Airflow+upgrade+procedure",
"content": "IntroductionAirflow used by MDM HUB is maintained by Apache: https://airflow.apache.org/. To deploy airflow we are using official airflow helm chart: https://github.com/airflow-helm/chartsPrerequisiteVerify changelog for changes that could alter behaviour/usage in new version and plan configuration adjustments to make it work correctly.https://airflow.apache.org/docs/apache-airflow/stable/release_notes.htmlEnsure base images are mirrored to COMPANY artifactory.Generic procedureProcedure assumes that upgrade will be executed and tested on the SBX first.Upgrade StepsAirflow version upgradeApply changes in mdm-hub-inbound-services:Change airflow airflowVersion and defaultAirflowTagtag to updated version in:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/browse/helm/airflow/src/main/helm/values.yamlChange airflow docker base image version in:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/browse/helm/airflow/docker/Dockerfile Apply other changes to helm chart if necessary (Prerequisite step 1)Apply configuration changes in mdm-hub-cluster-env:Apply needed changes to configuration if necessary (Prerequisite step 1)http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-cluster-env/browse/amer/sandbox/namespaces/airflow/values.yamlBuild and deploy changes with new configuration.Verify if the component is working properly:Check if component startedGo to the Airflow main page and verify if everything is working as expected (no log in issues, no errors, can see dags etc.)Check component logs for errorsCheck if all dags are working properlyFor dags with periodic schedule - wait for them to be triggered For dags executed from UI  - execute all of them with test data Airflow helm template upgradeDeploy current airflow version on local environment from mdm-hub-inboud-servicesGet current airflow helm manifest and save it to airflow_manifest_1.yaml\nhelm get manifest -n airflow 
airflow > airflow_manifest_1.yaml\nPull the new airflow chart version from the chart repository and replace it in the airflow/charts directory. Copy the old chart version to a temporary directory outside the repository for comparison\nhelm pull apache-airflow/airflow --version "1.13.0"\nmv airflow-1.13.0.tgz ${repo_dir}/mdm-hub-inbound-services/helm/airflow/src/main/helm/charts/airflow-1.13.0.tgz\nExtract the old helm chart and check the MODIFICATION_LIST file for modifications applied to the helm chart. Apply the needed changes to the new airflow chart.\ntar -xzf airflow-1.10.0_modified.tgz\ncat airflow/MODIFICATION_LIST\nPerform the helm upgrade with the new helm chart version. Verify if airflow is working as expectedGet the current airflow manifest and save it to airflow_manifest_2.yaml\nhelm get manifest -n airflow airflow > airflow_manifest_2.yaml\nCompare the generated manifests and verify whether there are breaking changesFix all issuesPast upgradesUpgrade Airflow x → yDescription:Procedure:Reference tickets:Reference PR's:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/pull-requests/1283/overview"
},
{
"title": "AKHQ upgrade procedure",
"pageID": "401611810",
"pageLink": "/display/GMDM/AKHQ+upgrade+procedure",
"content": "IntroductionAKHQ used in MDM HUB is mantained by tchiotludo/akhq.PrerequisiteVerify changelog for changes that could alter behaviour/usage in new version and plan configuration adjustments to make it work correctly.Ensure base images are mirrored to COMPANY artifactory.Generic procedureProcedure assumes that upgrade will be executed and tested on the SBX first.Upgrade StepsApply changes in mdm-hub-inbound-services:Change akhq image tag to updated version in:mdm-hub-inbound-services/helm/kafka/chart/src/main/helm/templates/akhq/akhq.yamlmdm-hub-inbound-services/helm/kafka/chart/src/main/helm/values.yamlApply other changes to helm chart if necessary (Prerequisite step 1)Apply configuration changes in mdm-hub-cluster-env:Change akhq image tag to updated version in mdm-hub-cluster-env/amer/sandbox/namespaces/amer-backend/values.yaml (example for SBX)Apply other changes to configuration if necessary (Prerequisite step 1)Build and deploy changes with new configuration.Verify if the component is working properly:Check if component startedGo to the AKHQ dashboard and verify if everything is working as expected (no log in issues, no errors, can see topics, consumergroups etc.)Check component logs for errorsPast upgradesUpgrade AKHQ 0.14.1 → 0.24.0 (0.23.0)Description:This update required upgrade to version 0.24.0. After checking changes between previous version and target version it become obvious that there are required additional changes to helm chart.There were detected errors during upgrade verification for which no fix was found in version 0.24.0. 
That resulted in changing version to 0.23.0, where the issue didn't occur.Procedure:Pushed base image to COMPANY artifactory: artifactory.COMPANY.com/mdmhub-docker-dev/tchiotludo/akhq:0.24.0Applied inbound-services changes:changed image tag to 0.24.0 in:akhq.yamlvalues.yamlApplied necessary changes to akhq-cm.yaml (based of changelog requirements):added micronaut configurationmoved topic-data property under ui-options propertyadjusted security configurationChanged image tag to 0.24.0 in cluster-env values.yamlBuild inbound-services changes and deployed them with new configuration on SBX environment.Verified if component is working:component startedthere was an error present after logging Inthere was an exception thrown in logs:java.lang.NullPointerException: null\nat org.akhq.repositories.AvroWireFormatConverter.convertValueToWireFormat(AvroWireFormatConverter.java:39)\n\tat org.akhq.repositories.RecordRepository.newRecord(RecordRepository.java:454)\n\tat org.akhq.repositories.RecordRepository.lambda$getLastRecord$3(RecordRepository.java:109)\n\tat java.base/java.lang.Iterable.forEach(Unknown Source)\n\tat org.akhq.repositories.RecordRepository.getLastRecord(RecordRepository.java:107)\n\tat org.akhq.controllers.TopicController.lastRecord(TopicController.java:224)\n\tat org.akhq.controllers.$TopicController$Definition$Exec.dispatch(Unknown Source)\n\tat io.micronaut.context.AbstractExecutableMethodsDefinition$DispatchedExecutableMethod.invoke(AbstractExecutableMethodsDefinition.java:351)\n\tat io.micronaut.context.DefaultBeanContext$4.invoke(DefaultBeanContext.java:583)\n\tat io.micronaut.web.router.AbstractRouteMatch.execute(AbstractRouteMatch.java:303)\n\tat io.micronaut.web.router.RouteMatch.execute(RouteMatch.java:111)\n\tat io.micronaut.http.context.ServerRequestContext.with(ServerRequestContext.java:103)\n\tat io.micronaut.http.server.RouteExecutor.lambda$executeRoute$14(RouteExecutor.java:656)\n\tat 
reactor.core.publisher.FluxDeferContextual.subscribe(FluxDeferContextual.java:49)\n\tat reactor.core.publisher.InternalFluxOperator.subscribe(InternalFluxOperator.java:62)\n\tat reactor.core.publisher.FluxSubscribeOn$SubscribeOnSubscriber.run(FluxSubscribeOn.java:194)\n\tat io.micronaut.reactive.reactor.instrument.ReactorInstrumentation.lambda$null$0(ReactorInstrumentation.java:62)\n\tat reactor.core.scheduler.WorkerTask.call(WorkerTask.java:84)\n\tat reactor.core.scheduler.WorkerTask.call(WorkerTask.java:37)\n\tat io.micrometer.core.instrument.composite.CompositeTimer.recordCallable(CompositeTimer.java:68)\n\tat io.micrometer.core.instrument.Timer.lambda$wrap$1(Timer.java:171)\n\tat io.micronaut.scheduling.instrument.InvocationInstrumenterWrappedCallable.call(InvocationInstrumenterWrappedCallable.java:53)\n\tat java.base/java.util.concurrent.FutureTask.run(Unknown Source)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)\n\tat java.base/java.lang.Thread.run(Unknown Source) \nFound no fix / workaround for this in 0.24.0 version, decided to change version to 0.23.0Applied inbound-services changes:changed image tag to 0.23.0 in:akhq.yamlvalues.yamlChanged image tag to 0.23.0 in cluster-env values.yamlBuild inbound-services changes and deployed them with new configuration on SBX environment.Verified if component is working:component startedno errors present on dashboard, everything is as expectedno errors in logsReference tickets:[MR-6778] Prepare AKHQ upgrade plan to version 0.24.0Reference PR's:[MR-6778] AKHQ upgraded to 0.23.0[MR-6778] SANDBOX: AKHQ version change to 0.23.0"
},
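The AKHQ version pinning described above touches the image tag in more than one values file; keeping them consistent is the whole change. A hypothetical values.yaml fragment (the key layout is an assumption, not the actual inbound-services schema):

```yaml
# Sketch of an AKHQ image pin in a Helm values file (assumed key layout).
akhq:
  image:
    repository: artifactory.COMPANY.com/mdmhub-docker-dev/tchiotludo/akhq
    tag: "0.23.0"   # rolled back from 0.24.0 due to the NullPointerException above
```

The same tag would be set in every file that references the image (akhq.yaml, values.yaml, cluster-env values.yaml), so a rollback is a search for one string.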
{
"title": "Consul upgrade procedure",
"pageID": "401611813",
"pageLink": "/display/GMDM/Consul+upgrade+procedure",
"content": "IntroductionConsul used in MDM is installed using official Consul Helm chart provided by Hashicorp.PrerequisiteBefore upgrade verify checklist:Consul - check changelog for deprecationsMDM Hub components and Hub Partners use REST API and to access Key/Value storage - make sure it worksDocker images are mirrored to COMPANY ArtifactoryGeneric procedureProcedure assumes that upgrade will be executed and tested on the SBX first.Upgrade steps:Upgrade Consul Helm chartUpgrade Consul Docker imagesUpdate this confluence pagePast upgradesUpgrade 1.10.2 → 1.16.2DescriptionThis was the only Consul upgrade so far.upgrade Consul chart to version 1.2.2upgrade Consul server to 1.16.2ProcedureUpgrade Consul Helm chartAdd Hashicorp Helm repo and find the newest Consul chart and app version\nhelm repo add hashicorp https://helm.releases.hashicorp.com\nhelm search repo hashicorp/consul\nIn helm/consul/src/main/helm/Chart.yaml uncomment repository and change version numberUpdate dependencies\ncd helm/consul/src/main/helm\nhelm dependency update\nComment repository line back in Chart.yamlCommit only the updated charts/consul-*.tgz and Chart.yaml filesUpgrade Consul Docker imagePull official images from Docker Hubhttps://hub.docker.com/r/hashicorp/consul/tagshttps://hub.docker.com/r/hashicorp/consul-k8s-control-plane/tagsTag images with artifactory.COMPANY.com/mdmhub-docker-dev/ prefixPush images to ArtifactoryUpdate cluster-env configuration (backend namespace)Change Docker image tags to uploaded in previous stepDeploy updated backendEnsure cluster is in a running stateReference tickets\n MR-7210\n -\n Getting issue details...\n STATUS\n \n MR-7211\n -\n Getting issue details...\n STATUS\n \n MR-7212\n -\n Getting issue details...\n STATUS\n Reference PRsPull Request #1395: [MR-7210] Upgrade Consul - Harmony-Bitbucket (COMPANY.com)Pull Request #1108: [MR-7210] Upgrade Consul - amer-sandbox - Harmony-Bitbucket (COMPANY.com)Pull Request #1153: [MR-7210] Upgrade Consul - 
amer-nprod, emea-nprod, apac-nprod - Harmony-Bitbucket (COMPANY.com)Pull Request #1176: [MR-7212] Upgrade Consul - amer-prod, emea-prod, apac-prod - Harmony-Bitbucket (COMPANY.com)"
},
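The Consul chart bump described above can be illustrated with the dependency stanza that `helm dependency update` consumes. A sketch of what helm/consul/src/main/helm/Chart.yaml might contain (the exact field layout is an assumption; the versions come from the 1.10.2 → 1.16.2 upgrade notes):

```yaml
# Hypothetical dependency stanza in helm/consul/src/main/helm/Chart.yaml.
dependencies:
  - name: consul
    version: "1.2.2"   # chart version carrying Consul app version 1.16.2
    # The repository line is only uncommented for `helm dependency update`,
    # then commented back before committing, per the procedure above:
    # repository: "https://helm.releases.hashicorp.com"
```

Only the refreshed charts/consul-*.tgz and this Chart.yaml would be committed.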
{
"title": "Elastic stack upgrade",
"pageID": "401611843",
"pageLink": "/display/GMDM/Elastic+stack+upgrade",
"content": "Introduction:ECK stack used in MDM is installed using official ECK stack installation procedures provided by Elasticsearch B.V..PrerequisiteBefore upgrade verify checklist:Elasticsearch - check changelog for deprecationhttps://www.elastic.co/guide/en/elasticsearch/reference/current/es-release-notes.htmlKibana - check changelog for deprecationshttps://www.elastic.co/guide/en/kibana/current/release-notes.htmlLogstash - check changelog for deprecationshttps://www.elastic.co/guide/en/logstash/current/releasenotes.htmlFleetServer - check changelog for deprecationshttps://www.elastic.co/guide/en/fleet/current/release-notes.htmlAPM jar agents https://elastic.co/guide/en/apm/agent/java/current/release-notes.htmlDocker images are mirrored to COMPANY ArtifactoryGeneric procedureProcedure assumes that upgrade will be executed and tested on the SBX first.Upgrade Elastic stack steps:Upgrade Elasticsearch docker imageUpgrade Elasticsearch plugins and dependenciesUpgrade Kibana docker imageUpgrade Logstash docker imageUpgrade Logstash drivers and dependenciesUpgrade FleetServer docker imageUpgrade APM jar agentsUpdate this confluence pagePast upgradesECK operator installationUninstall olm ECK operator Scale down the number of olm-operator pods to 0Delete eck olm Subscription with orphan propagationkubectl delete subscription my-elastic-cloud-eck --cascade=orphan\nDelete all eck olm InstallPlans with orphan propagationkubectl delete installplans install-* --cascade=orphan\nDelete all "eck" ClusterServiceVersions with orphan propagationfor ns in $(kubectl get namespaces -o name | cut -c 11-);\ndo\necho $ns;\nkubectl delete csv elastic-cloud-eck.v2.10.0 -n $ns --cascade=orphan;\ndone\nScale down elastic-operator to 0Delete eck operator objects:ConfigMapsfor cm in $(kubectl get cm | awk '{if ($1 ~ "elastic-") print $1}');\ndo\n echo $cm;\n kubectl delete cm $cm --cascade=orphan;\ndone\nServiceAccountkubectl delete sa elastic-operator --cascade=orphan\nElastic operator 
certkubectl delete ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●● --cascade=orphan\nClusterRole - everything with "elastic" in name besides elastic-agentfor cr in $(kubectl get clusterrole | grep -v elastic-agent | awk '{if ($1 ~ "elastic") print $1}')\ndo\n echo $cr;\n kubectl delete clusterrole $cr --cascade=orphan;\ndone\nServicekubectl delete service elastic-operator-service --cascade=orphan\nDeployment eck-operatorkubectl delete deployment eck-operatorInstall eck-operator standaloneAdjust labels and annotations of CRDsfor CRD in $(kubectl get crds --no-headers -o custom-columns=NAME:.metadata.name | grep k8s.elastic.co); do\n echo "changing $CRD"; \n kubectl annotate crd "$CRD" meta.helm.sh/release-name="operators";\n kubectl annotate crd "$CRD" meta.helm.sh/release-namespace="operators";\n kubectl label crd "$CRD" app.kubernetes.io/managed-by=Helm;\ndone\nInstall eck-operator without OLM by deploying operators version 4.1.19-project-boldmove-SNAPSHOT or newerUpgrade ECK stackProcedure:Upgrade Elastic stack docker imagesPull from DockerHub and push the newest possible docker tag images of all Elastic stack components besides APM agentDownload from maven repo and push to artifactory maven gallery the newest jar of APM agentChange version tag in inbound-services repo of all Elastic stack componentsRepeat steps 3 - 5 in the following order:Elasticsearch - wait until all nodes are updated (shard relocation takes long)KibanaLogstash and FleetServerUpdate cluster-env configuration (backend namespaces)Change Docker image tagDeploy updated backend with Jenkins jobEnsure backend component is working fineDeploy mdmhub to update APM agentsEnsure mdmhub components are working fineReference tickets: MR-8152"
},
{
"title": "Fluent Bit (Fluentbit) upgrade procedure",
"pageID": "401611834",
"pageLink": "/display/GMDM/Fluent+Bit+%28Fluentbit%29+upgrade+procedure",
"content": "Introduction:FluentBit used in MDM is installed using official Fleuntbit installtion proc provided by Cloud Native Computing Foundation.PrerequisiteBefore upgrade verify checklist:FluentBit - check changelog for deprecationshttps://docs.fluentbit.io/manual/installation/upgrade-notesDocker images are mirrored to COMPANY ArtifactoryGeneric procedureProcedure assumes that upgrade will be executed and tested on the SBX first.Upgrade steps:Upgrade Fluentbit Docker imagesUpdate this confluence pagePast upgradesUpgrade 1.8.11 → 2.2.2Description:This was the only Fluentbit upgrade so far.upgrade Fluentbit docker image to version 2.2.2Procedure:Upgrade Fluentbit docker imagePull from DockerHub and push the newest possible docker tag image of fluentbit-debug and fluentbit to artifactory.Change version tag in inbound-services repo of mdmhub fluentbit and kubevents fluentbit.Update cluster-env configuration (envs and backend namespaces)Change Docker image tags to uploaded in previous stepDeploy updated backend for kubevents and mdmhub for components logs with Jenkins jobsEnsure kubevents and mdmhub logs are being stored in Elasticsearch, check Kibanas.Reference tickets: \n MR-8094\n -\n Getting issue details...\n STATUS\n \n MR-8245\n -\n Getting issue details...\n STATUS\n \n MR-8344\n -\n Getting issue details...\n STATUS\n Reference PRs:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/pull-requests/1583/diff#newsfragments/MR-8094.fchange.md"
},
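The "pull from DockerHub, retag with the Artifactory prefix, push" step recurs in most of these procedures. A minimal shell sketch of the naming convention (the docker commands are shown as comments so the runnable part is only the retagging; the upstream image name is an assumption based on the project's DockerHub organization):

```shell
# Mirror an upstream image into COMPANY Artifactory (sketch).
# The docker commands are commented out; the runnable part only
# demonstrates the retagging convention used in these procedures.
SRC="fluent/fluent-bit:2.2.2"                             # assumed upstream name
DEST="artifactory.COMPANY.com/mdmhub-docker-dev/${SRC}"   # mirrored name
# docker pull "${SRC}"
# docker tag "${SRC}" "${DEST}"
# docker push "${DEST}"
echo "${DEST}"
```

The mirrored tag is then referenced from the inbound-services and cluster-env repos.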
{
"title": "Fluentd upgrade procedure",
"pageID": "401611830",
"pageLink": "/display/GMDM/Fluentd+upgrade+procedure",
"content": "Introduction:Fluentd used in MDM is installed using official Fluentd installation procedures provided by Cloud Native Computing Foundation.PrerequisiteBefore upgrade verify checklist:Fluentd - check changelog for deprecationshttps://github.com/fluent/fluentd/blob/master/CHANGELOG.mdhttps://github.com/uken/fluent-plugin-elasticsearch/issues/937 - no go issue (currently we are using the highest elasticsearch-api 7.x.x version)Docker images are mirrored to COMPANY ArtifactoryGeneric procedureProcedure assumes that upgrade will be executed and tested on the SBX first.Upgrade steps:Upgrade Fluentd Docker imagesUpgrade Fluentd plugins and dependenciesUpdate this confluence pagePast upgradesUpgrade fluentd-kubernetes-daemonset - v1.12-debian-elasticsearch7-1 → v1.16.2-debian-elasticsearch7-1.1Procedure:Change docker image base to the newest version in env-config repo, (ex. "fluentd-kubernetes-daemonset:v1.16.2-debian-elasticsearch7-1.1")Build image with docker build job : https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm_manage_playbooks/job/Docker/job/build_Dockerfile/Update cluster-env repo configuration with the new image tag for fluentd (ex. 981)Test on SBXAfter checking fluentd output logs, the following actions were needed to be taken:upgrading of the following plugins and dependencies:"ruby-kafka", "~> 1.5""fluent-plugin-kafka", "0.19.2"defining new mappings in "backend" and "others" datastreams: "properties": {\n "kubernetes.labels.app": {\n "dynamic": true,\n "type": "object",\n "enabled": false\n }\nexecute ansible playbook with index template update rollover "backend" and "others" datastreams after mappings changeReference tickets: \n MR-8093\n -\n Getting issue details...\n STATUS\n \n MR-8097\n -\n Getting issue details...\n STATUS\n \n MR-8343\n -\n Getting issue details...\n STATUS\n "
},
{
"title": "Kafka clients upgrade procedure",
"pageID": "401611855",
"pageLink": "/display/GMDM/Kafka+clients+upgrade+procedure",
"content": "IntroductionThere are two tools that we need to take under consideration when upgrade'ing Kafka clients, both are managed by Confluent Inc.:cp-kcat (DockerHub: confluentinc/cp-kcat, GitHub: confluentinc/kafkacat-images)cp-kafka (DockerHub: confluentinc/cp-kafka, GitHub: confluentinc/kafka-images)PrerequisiteBefore proceeding with upgrade verify checklist:Verify changelogs for changes that could alter behaviour/usage of updated tools and decide the steps to take to ensure the components will work correctly after update (eg. check if there is a need for adjustments of wrapper scripts present on our images).Ensure base images are mirrored to COMPANY artifactory.Generic procedureProcedure assumes that upgrade will be executed and tested on the SBX first.Upgrade Stepscp-kcat:Change image tag in mdm-hub-inbound-services/helm/kafka/kcat/docker/Dockerfile.Build and deploy changes.Verify if container is working correctly.Verify if all wrapper scripts included in mdm-hub-inbound-services/helm/kafka/kcat/docker/bin are running correctly.cp-kafka:Change image tag in mdm-hub-inbound-services/helm/kafka/kafka-client/docker/Dockerfile.Build and deploy changes.Verify if container is working correctly.Verify if all wrapper scripts included in mdm-hub-inbound-services/helm/kafka/kafka-client/docker/bin are running correctly.Past upgradesUpgrade cp-kcat 7.30→ 7.5.2 and cp-kafka 6.1.0→7.5.2Description:This update require to update both cp-kcat and cp-kafka to version 7.5.2 to eliminate CVE-2023-4911 vulnerability.Procedure:Pushed base images for updated components to COMPANY artifactory:confluentinc/cp-kcat:7.5.2 →  artifactory.COMPANY.com/mdmhub-docker-dev/mdmtools/confluentinc/cp-kcat:7.5.2confluentinc/cp-kafka:7.5.2 → artifactory.COMPANY.com/mdmhub-docker-dev/confluentinc/cp-kafka:7.5.2Changed images versions in Dockerfiles:cp-kcat 7.30→ 7.5.2cp-kafka 6.1.0→7.5.2Built changes and deployed on SBX environment.Verified that both containers started successfully.Executed 
into each container and tested if all wrapper scripts present at /opt/app/bin are running and returning expected results.Deployed changes to other environments.Reference tickets:[MR-7910] Update Confluentinc cp-kcat and cp-kafka to 7.5.2Reference PR's:[MR-7910] Updated kcat and cp-kafka base images to v7.5.2."
},
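Since only the base image tag changes during a cp-kcat/cp-kafka upgrade, the Dockerfile edit is a one-line change. A hypothetical sketch of the kcat Dockerfile (the COPY line and target path are assumptions; the FROM path matches the Artifactory mirror mentioned above):

```dockerfile
# Sketch of mdm-hub-inbound-services/helm/kafka/kcat/docker/Dockerfile.
# Only the FROM tag changes during an upgrade; the wrapper scripts are
# copied to /opt/app/bin, where they are re-tested after deployment.
FROM artifactory.COMPANY.com/mdmhub-docker-dev/mdmtools/confluentinc/cp-kcat:7.5.2
COPY bin/ /opt/app/bin/
```

The post-upgrade verification then amounts to exec'ing into the container and running each script under /opt/app/bin.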
{
"title": "Kafka upgrade procedure",
"pageID": "401611803",
"pageLink": "/display/GMDM/Kafka+upgrade+procedure",
"content": "IntroductionKafka used in MDM is installed, configured and upgraded using Strimzi Kafka OperatorPrerequisiteBefore upgrade verify checklist:There must be no critical errors for the environment Alerts MonitoringKafka Cluster Overview must  show 0 for Under-Replicated PartitionsUnder-Min-ISR PartitionsOffline PartitionsUnclean Leader ElectionPreferred Replica Imbalance >0 is not a blocker, but a high number may indicate an issue with Kafka performance.Generic procedureProcedure assumes that upgrade will be executed and tested on the SBX first.Upgrade steps:Verify if Strimzi Kafka Operator supports Kafka version you want to install (Supported versions - https://strimzi.io/downloads/)if not, upgrade Strimzi chart firstChange Kafka version in environment configurationUpdate this confluence pagePast upgradesUpgrade 3.6.1 → 3.7.0 and ZK to KRaft migrationDescriptionThis upgrade was part of the \n MR-8004\n -\n Getting issue details...\n STATUS\n Epic.upgrade Strimzi Kafka operator chart to version 0.41.0upgrade Kafka to version 3.7.0apply strimzi CRDs (important!)ZooKeeper to KRaft migrationProcedureUpgrade Strimzi operator to the version supporting Kafka 3.6.1Add Strimzi Helm repo and find the newest Consul chart and app version\nhelm repo add strimzi https://strimzi.io/charts\nhelm search repo strimzi/strimzi-kafka-operator\nIn helm/operators/src/main/helm/Chart.yaml uncomment Strimzi repository and change version numberUpdate dependencies\ncd helm/operators/src/main/helm\nhelm dependency update\nComment repository line back in Chart.yamlCommit only the updated charts/strimzi-kafka-operator-helm-*.tgz and Chart.yaml filesUpgrade default Kafka to 3.7.0 in mdm-hub-inbound-servicesUpgrade Kafka per environmentDeploy updated operators with the new StrimziUpdate cluster-env configuration (backend namespace)Deploy updated backendEnsure cluster is in a running stateReference tickets\n MR-9004\n -\n Getting issue details...\n STATUS\n \n MR-9019\n -\n Getting issue 
details...\n STATUS\n Reference PRs[MR-9019] Upgrade stimzi kafka operator to version 0.41.0 and Kafka to version 3.7.0Upgrade 3.5.1 → 3.6.1DescriptionThis upgrade was part of the \n MR-8004\n -\n Getting issue details...\n STATUS\n Epic.upgrade Strimzi Kafka operator chart to version 0.39.0upgrade Kafka to version 3.6.1change in the entityOperator configration was requiredchange in Kafka Connect configuration was requiredProcedureUpgrade Strimzi operator to the version supporting Kafka 3.6.1Add Strimzi Helm repo and find the newest Consul chart and app version\nhelm repo add strimzi https://strimzi.io/charts\nhelm search repo strimzi/strimzi-kafka-operator\nIn helm/operators/src/main/helm/Chart.yaml uncomment Strimzi repository and change version numberUpdate dependencies\ncd helm/operators/src/main/helm\nhelm dependency update\nComment repository line back in Chart.yamlCommit only the updated charts/strimzi-kafka-operator-helm-*.tgz and Chart.yaml filesUpgrade default Kafka to 3.6.1 in mdm-hub-inbound-serviceschange Kafka config and wait for the operator to apply changes:remove inter.broker.protocol.version: "3.5"remove log.message.format.version: "3.5"set kafka.version: 3.6.1Upgrade Kafka per environmentDeploy updated operators with the new Strimzi strimziUpdate cluster-env configuration (backend namespace)Deploy updated backendEnsure cluster is in a running stateReference tickets\n MR-7408\n -\n Getting issue details...\n STATUS\n \n MR-7900\n -\n Getting issue details...\n STATUS\n \n MR-8146\n -\n Getting issue details...\n STATUS\n Reference PRs[MR-7900] Upgrade stimzi kafka operator to version 0.39.0 and Kafka to 3.6.1[MR-7900] Kafka - enable template change for entityOperator[MR-7900] Upgrade Kafka to 3.6.1 - amer sandbox[MR-7900] Upgrade Kafka to 3.6.1 - nprods[MR-7900] Remove forbidden and ignored Kafka connect configuration - nprods[MR-8146] Prepare for Kafka upgrade on prod[MR-8146] Upgrade Kafka to 3.6.1 - prods"
},
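The 3.5.1 → 3.6.1 config change described above maps onto the Strimzi Kafka custom resource roughly as follows (abbreviated sketch; surrounding fields and the full CR are omitted):

```yaml
# Strimzi Kafka CR fragment for the 3.6.1 upgrade (sketch).
spec:
  kafka:
    version: 3.6.1
    config:
      # Removed during the upgrade, per the procedure above:
      # inter.broker.protocol.version: "3.5"
      # log.message.format.version: "3.5"
```

After the CR change is applied, the operator performs a rolling restart of the brokers; the prerequisite checks (no under-replicated or offline partitions) exist precisely because of that roll.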
{
"title": "Kong upgrade procedure",
"pageID": "401611825",
"pageLink": "/display/GMDM/Kong+upgrade+procedure",
"content": "IntroductionKong used in MDM HUB is mantained by Kong/kong.PrerequisiteVerify changelog for changes that could alter behaviour/usage in new version and plan configuration adjustments to make it work correctly.Ensure base images are mirrored to COMPANY artifactory.Generic ProcedureProcedure assumes that upgrade will be executed and tested on the SBX first.Upgrade StepsChange image tag to updated version in mdm-hub-env-config/docker/kong3/DockerfileBuild and push docker image based on updated Dockerfile.Change the tag of kong image in mdm-inbound-services/helm/kong/src/main/helm/values.yaml to the one that was build in Step 2.Change the tag of kong image in mdm-cluster-env/helm/amer/sandbox/namespaces/kong/values.yaml to the one that was build in Step 2.Build changes from Step 3 and deploy with configuration added in Step 4.Verify update:Check if component started.Check if API requests are accepted and return correct responsesCheck if kong-mdm-external-oauth-plugin works properly (try OAuth authorization and then some API calls to verify it)Past upgradesUpgrade Kong 3.2.2 → 3.4.2Description:This update required update to version 3.4.2 to fix the CVE-2023-4911 vulnerability on NPROD and PROD.Procedure:Changed image tag to 3.4.2 in mdm-hub-env-config/docker/kong3/DockerfileBuilt and pushed docker image to artifactory.Changed the tag of kong image in mdm-inbound-services/helm/kong/src/main/helm/values.yaml to the one that was build in Step 2 (951).Changed the tag of kong image in mdm-cluster-env/helm/{tenant}/{nprod|prod}/namespaces/kong/values.yaml to the one that was build in Step 2 (951).Built changes from Step 3 and deploy with configuration added in Step 4.Verified update:Component started.API requests were accepted and returned correct responseskong-mdm-external-oauth-plugin worked properly (checked OAuth and some API requests)Reference Tickets:[MR-7599] Update kong to 3.4.2Reference PR's:[MR-7599] Updated kong to 3.4.2[MR-7599] Updated kong to 3.4.2"
},
{
"title": "Mongo upgrade procedure",
"pageID": "401611849",
"pageLink": "/display/GMDM/Mongo+upgrade+procedure",
"content": "Introduction:Mongo used in MDM is managed by mongodb-kubernetes-operator. When updating mongo, we must think about all components at the same time.Mongo operator bring additional images to orchestrate and managed mongo cluster PrerequisiteBefore migration verify checklist:MongoDB Kubernetes operator is compatible with target mongo version.Components (mongo clients) are compatible with target mongo version (e.g: java mongo driver)Affected components:MDM services Monstache Airflow DAGs images are mirrored to COMPANY artifactoryGeneric procedureProcedure assumes that upgrade will be executed and tested on the SBX first.Upgrade steps:Verify if MongoDB Kubernetes operator documentation provides specific for planned upgrade Upgrade Mongo OperatorUpdate cluster-env configuration (operators namespace)Deploy new OperatorEnsure if cluster is in running state  Upgrade Mongo Update cluster-env configuration (backend namespace) Deploy updated backend NOTE: step a and b can be execute multiple times (first we upgrade mongo images then we updated featureCompatibilityVersion parameter) Ensure if cluster is in running state   Update confluence pagePast upgradesUpgrade 4.2.6 → 6.0.9Description:This upgrade required multiple intermediate upgrades without upgrading Mongo Kubernetes Operator Procedure:Upgrade image 4.2.6 → 4.4.24 by updating cluster-env configuration (backend namespace)Deploy updated backendEnsure if cluster is in running state  Upgrade featureCompatibilityVersion to 4.4 by updating cluster-env configuration (backend namespace)Deploy updated backendEnsure if cluster is in running state Upgrade image 4.4.24  → 5.0.20 by updating cluster-env configuration (backend namespace)Deploy updated backendEnsure if cluster is in running state  Upgrade featureCompatibilityVersion to 5.0 by updating cluster-env configuration (backend namespace)Deploy updated backendEnsure if cluster is in running state Upgrade image  5.0.20 → 6.0.9 by updating cluster-env configuration 
(backend namespace)Deploy updated backendEnsure if cluster is in running state  Upgrade featureCompatibilityVersion to 6.0 by updating cluster-env configuration (backend namespace)Deploy updated backendEnsure if cluster is in running state Reference tickets: [MR-7662] Deploy on PRODs - Jira (COMPANY.com)Reference PRs:Pull Request #1230: MR-7662 APAC PROD mongo upgrade 4.4 - Harmony-Bitbucket (COMPANY.com)Pull Request #1231: MR-7662 APAC PROD mongo upgrade 4.4 featureCompatibilityVersion 4.4 - Harmony-Bitbucket (COMPANY.com)Pull Request #1232: MR-7662 APAC PROD mongo upgrade 5.0 - Harmony-Bitbucket (COMPANY.com)Pull Request #1233: MR-7662 APAC PROD mongo upgrade 5.0 featureCompatibilityVersion 5.0 - Harmony-Bitbucket (COMPANY.com)Pull Request #1234: MR-7662 APAC PROD mongo upgrade 6.0 - Harmony-Bitbucket (COMPANY.com)Pull Request #1235: MR-7662 APAC PROD mongo upgrade 6.0 featureCompatibilityVersion 6.0 - Harmony-Bitbucket (COMPANY.com)Upgrade Operator 0.7.3 → 0.8.2 Description:This upgrade was required to enable mongo horizon feature. Previous version of operator was unstable and sometimes failed to complete reconciliation of mongo cluster. Mongo itself was no updated in this upgradeProcedure:Update cluster-env configuration (operators namespace)Deploy new OperatorEnsure if cluster is in running state  Reference tickets: [MR-5502] Mongo Horizons: Deploy changes to PRODs - Jira (COMPANY.com)Reference PRs:Pull Request #1281: MR-5502 APAC PROD mongo operator upgrade - Harmony-Bitbucket (COMPANY.com)Upgrade 6.0.9 → 6.0.11Description:This upgrade required only upgrading mongo image. At this time there was no newer version of mongodb Kubernetes operator. 
Procedure:Update cluster-env configuration (backend namespace)Deploy updated backendEnsure if cluster is in running state  Reference tickets: [MR-8029] NPROD: Upgrade mongo to 6.0.11 - Jira (COMPANY.com)[MR-8076] PRODs: Upgrade mongo to 6.0.11 - Jira (COMPANY.com)Reference PRs:Pull Request #1356: MR-8029 mongo upgrade to 6.0.11 - APAC - Harmony-Bitbucket (COMPANY.com)Pull Request #1357: MR-8029 mongo upgrade to 6.0.11 - AMER - Harmony-Bitbucket (COMPANY.com)Pull Request #1358: MR-8029 mongo upgrade to 6.0.11 - EMEA - Harmony-Bitbucket (COMPANY.com)Pull Request #1383: MR-8076 mongo upgrade to 6.0.11 - AMER PROD - Harmony-Bitbucket (COMPANY.com)Pull Request #1382: MR-8076 mongo upgrade to 6.0.11 - EMEA PROD - Harmony-Bitbucket (COMPANY.com)Pull Request #1384: MR-8076 mongo upgrade to 6.0.11 - APAC PROD - Harmony-Bitbucket (COMPANY.com)Upgrade 6.0.11 → 6.0.21DescriptionThis was planned periodic upgrade. During this upgrade also kubernetes mongo operator was upgraded from 0.8.2 to 0.12.0. To perform this upgrade there was change needed in MongoDBCommunity helm template. We were using users configuration in wrong way - uniqueness constraint on  scramCredentialsSecretName field was violated Procedure:Deploy backend with new code version ( changed MongoDBCommunity helm template ) - PR Merge configuration change with mongo operator and mongo version change Deploy operators (Mongo is being restarted)Check cluster state - mongo operato, mongo and component logsDeploy backend (Mongo is being restarted - upgrade)Check cluster state - mongo operato, mongo and component logsReference tickets\n MR-10399\n -\n Getting issue details...\n STATUS\n Reference PRsCode changeConfig change - EMEA NPRODConfig change - APAC NPRODConfig change - AMER NPRODConfig change - AMER SBXMongoDBCommunity"
},
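The two-phase pattern above (bump the image, then raise featureCompatibilityVersion in a separate deploy) maps onto the MongoDBCommunity resource roughly like this (sketch; surrounding fields omitted, field names follow the community operator's CRD):

```yaml
# MongoDBCommunity CR fragment (sketch) for one intermediate step
# of the 4.2.6 → 6.0.9 chain.
spec:
  version: "4.4.24"                    # deploy 1: image upgrade from 4.2.6
  featureCompatibilityVersion: "4.2"   # deploy 2: raise to "4.4" once stable
```

Keeping featureCompatibilityVersion at the old value through the image bump is what makes each step reversible; only after the cluster is verified healthy is the compatibility version raised.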
{
"title": "Monstache upgrade procedure",
"pageID": "401611821",
"pageLink": "/display/GMDM/Monstache+upgrade+procedure",
"content": "Introduction:Monstache used in MDM is installed using official Monstache installation procedure provided by Ryan Wynn.PrerequisiteBefore upgrade verify checklist:Monstache - check changelog for deprecationshttps://github.com/rwynn/monstache/releasesDocker images are mirrored to COMPANY ArtifactoryGeneric procedureProcedure assumes that upgrade will be executed and tested on the SBX first.Upgrade steps:Upgrade Monstache Docker imagesUpdate this confluence pagePast upgradesUpgrade 6.7.0 → 6.7.17Description:This was the only Monstache upgrade so far.upgrade Monstache docker image to version 6.7.17Procedure:Upgrade Monstache docker imagePull from DockerHub and push the newest possible docker tag image of monstache to artifactory.Change version tag in inbound-services repo of monstache.Update cluster-env configuration (envs and backend namespaces)Change Docker image tags to uploaded in previous stepDeploy updated backend with Jenkins jobEnsure monstache is working fine, check logs on monstache Pod logs dir.Reference tickets: \n MR-8246\n -\n Getting issue details...\n STATUS\n \n MR-8097\n -\n Getting issue details...\n STATUS\n \n MR-8345\n -\n Getting issue details...\n STATUS\n Upgrade 6.7.17 → 6.7.21Description:Upgrade Monstache docker image to version 6.7.21Procedure:Upgrade Monstache docker imagePull from DockerHub and push the newest possible docker tag image of monstache to artifactory.Change version tag in inbound-services repo of monstache.Update cluster-env configuration (envs and backend namespaces)Change Docker image tags to uploaded in previous stepDeploy updated backend with Jenkins jobEnsure monstache is working fine, check logs on monstache Pod logs dir. PASSEDReference tickets: \n MR-10486\n -\n Getting issue details...\n STATUS\n \n MR-10493\n -\n Getting issue details...\n STATUS\n \n MR-10494\n -\n Getting issue details...\n STATUS\n "
},
{
"title": "Prometheus upgrade procedure",
"pageID": "521705242",
"pageLink": "/display/GMDM/Prometheus+upgrade+procedure",
"content": "Monitoring hostIntroductionOfficial Prometheus site: https://prometheus.io/To deploy Prometheus we use official docker image: https://hub.docker.com/r/prom/prometheus/PrerequisitesVerify CHANGELOG for changes that could alter behaviour/usage in new version and plan configuration adjustments to make it work correctly.Verify if other monitoring components are in versions compatible with version to which prometheus is upgraded. List of components to check:ThanosTelegrafSQS ExporterS3 ExporterNode ExporterKarmaGrafanaDNS ExportercAdvisorBlackbox ExporterAlertmanagerEnsure base images are mirrored to COMPANY artifactory.Generic ProcedureUpgrade stepsApply configuration changes in mdm-hub-cluster-env:Change prometheus image tag to updated version in mdm-hub-cluster-env/ansible/roles/install_monitoring_prometheus/defaults/main.ymlApply other changes to configuration if necessary (Prerequisites step 1)Upgrade dependant monitoring components if necessary (Prerequisites step 2)Install monitoring stack using ansible-playbook:ansible-playbook install_monitoring_stack.yml -i inventory/monitoring/inventory --vault-password-file=$VAULT_PASSWORD_FILE\nVerify installation: Check if monitoring components are up and runningCheck logsCheck metrics and dashboardsFix all issuesPast UpgradesUpgrade monitoring host Prometheus v2.30.3 → v2.53.4Description:This upgrade was a huge change in Prometheus version, therefore also Thanos had to be updated from main-2023-11-03-7e879c6 to v0.37.2 to maintain compatibility between those components. 
Some additional configuration adjustments had to be made on Thanos side during this upgrade.Procedure:Checked prerequisitesVerified that no breaking changes were made in Prometheus that would require configuration adjustments on our side.Verified that alongside Prometheus, Thanos had to be updated to v0.37.2 to keep compatibilityPushed Prometheus v2.53.4 and Thanos v0.37.2 to COMPANY artifactory.Changed Prometheus tag to v2.53.4 and Thanos tag to v0.37.2 in mdm-hub-cluster-env/ansible/roles/install_monitoring_prometheus/defaults/main.ymlInstalled monitoring stack using ansible-playbookVerified installation - noticed issues with Thanos Query that couldn't connect to Thanos Sidecar and Thanos StoreMade adjustments in Thanos configuration to fix those issues (See reference PR)Installed monitoring stack using ansible-playbook againVerified installation - all components, dashboards and metrics were working correctlyUpgrade finished successfullyReference Tickets:[MR-10396] Upgrade Prometheus and Thanos on the monitoring hostReference PR's:Pull Request #2435: [MR-10396] Upgraded prometheus to v2.53.4 & thanos to v0.37.2K8s clusterIntroductionTo deploy Prometheus on k8s clusters we use the following chart: kube-prometheus-stack.It contains definitions of Prometheus and related CRDs.PrerequisitesCheck which chart version contains the Prometheus version to which you want to upgrade. 
Verify Prometheus CHANGELOG and kube-prometheus-stack chart templates and default values for changes that could alter behaviour/usage in new version and plan configuration adjustments to make it work correctly.Generic ProcedureUpgrade StepsDownload and unpack kube-prometheus-stack-<new_version>Replace CRD's:cd kube-prometheus-stack\\charts\\crds\\crds\nkubectl -n monitoring replace -f "*.yaml"Create and build PR with helm chart upgradeupdate version in mdm-hub-inbound-services/helm/monitoring/src/main/helm/Chart.yamlupdate package version replacing charts/kube-prometheus-stack-<old_version>.tgz with charts/kube-prometheus-stack-<new_version>.tgzDeploy PR to SBX clusterVerify installation and merge the PRGet the number of metrics and alerts from Prometheus and compare them with the number before upgradeVerify if Grafana dashboards are working correctlyProceed to NPROD/PROD deployments (Verify installation after each of them)Past UpgradesUpgrade K8s cluster Prometheus v2.39.1 → v2.53.1Description:To perform this upgrade it was necessary to upgrade the helm chart used (kube-prometheus-stack) from v41.7.4 (containing Prometheus v2.39.1) to v61.7.2 (containing Prometheus v2.53.1)Procedure:Checked prerequisitesVerified that no breaking changes were made in Prometheus that would require configuration adjustments on our side.Verified that kube-prometheus-stack v61.7.2 contained Prometheus v2.53.1Downloaded and unpacked kube-prometheus-stack-61.7.2.tgzReplaced CRD'sCreated PR with upgraded chart version and replaced old package with kube-prometheus-stack-61.7.2.tgz (See reference PR)Deployed changes to SBX from PRVerified Installation (SBX)No lost metricsAll alerts correctGrafana dashboards working correctlyMerged PRReference Tickets:[MR-10398] SBX: Upgrade Prometheus K8sReference PR's:Pull Request #3417: [MR-10398] Upgraded monitoring helm chart version to 61.7.2"
},
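The "compare metric counts before and after the upgrade" verification step can be sketched as a small shell check. The curl/jq line in the comment shows where the counts would realistically come from (the endpoint and counts here are hypothetical):

```shell
# Compare Prometheus metric counts captured before/after an upgrade (sketch).
# Real counts would come from the Prometheus API, for example:
#   curl -s 'http://prometheus:9090/api/v1/label/__name__/values' | jq '.data | length'
BEFORE=1500   # hypothetical count captured before the upgrade
AFTER=1500    # hypothetical count captured after the upgrade
if [ "$AFTER" -lt "$BEFORE" ]; then
  echo "metrics lost: $((BEFORE - AFTER))"
else
  echo "no lost metrics"
fi
```

The same pattern applies to the alert count and can be repeated after each NPROD/PROD deployment.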
{
"title": "Infrastructure",
"pageID": "302705566",
"pageLink": "/display/GMDM/Infrastructure",
"content": ""
},
{
"title": "How to access AWS Console",
"pageID": "310939854",
"pageLink": "/display/GMDM/How+to+access+AWS+Console",
"content": "Add new user access to AWS AccountRequest access to the correct Security Group in the Request Managerhttps://requestmanager1.COMPANY.com/Group/Default.aspxie, for accessing the 432817204314 Account using the WBS-EUW1-GBICC-ALLENV-RO-SSO role, use the WBS-EUW1-GBICC-ALLENV-RO-SSO_432817204314_PFE-AWS-PROD Security GroupAWS ConsoleAlways use this AWS Console address: http://awsprodv2.COMPANY.com/ and there select the Account you want to use"
},
{
"title": "How to login to hosts with SSH",
"pageID": "310940209",
"pageLink": "/display/GMDM/How+to+login+to+hosts+with+SSH",
"content": "Generate a SSH key pair - private and publicCopy the public key to the ~/.ssh/authorized_keys file on the host and account you want to useuse ssh command to login, ie. ssh ec2-user@euw1z2dl115.COMPANY.comList the content of the ~/.ssh/authorized_keys file to check which keys are used"
},
{
"title": "How to restart the EC2 instance",
"pageID": "310940306",
"pageLink": "/display/GMDM/How+to+restart+the+EC2+instance",
"content": "Login to AWS Console (How to access AWS Console)Select EC2 Service from the search boxIn the navigation pane, choose Instances.Select the instance and choose Instance state, Reboot instance.Alternatively, select the instance and choose Actions, Manage instance state. In the screen that opens, choose Reboot, and then Change state.Choose Reboot when prompted for confirmationMore: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-reboot.html"
},
{
"title": "HUB-UI: Timeout issue after authorization",
"pageID": "337840086",
"pageLink": "/display/GMDM/HUB-UI%3A+Timeout+issue+after+authorization",
"content": "Issue description:When accessing the HUB-UI site, after successfully authorizing via SSO, a timeout may occur when trying to access the site.Solution:Check if you have valid COMPANY certificates installed in your browser. You can do that by clicking on the padlock icon in the browser address bar and checking if the connection is safe:If not, you have to install the certificates:Install RootCA-G2.cer:Double-click on the certificateChoose Install CertificateLocal MachineChoose "Place all certificates in the following store" and select store: "Trusted Root Certification Authorities"Click Finish to complete the installation processInstall PBACA-G2.cer:Double-click on the certificateChoose Install CertificateLocal MachineChoose "Automatically select the certificate store based on type of certificate"Click Finish to complete the installation processReboot computerVerify by accessing HUB-UI"
},
{
"title": "Key Auth Not Working on Hosts - Fix",
"pageID": "172294447",
"pageLink": "/display/GMDM/Key+Auth+Not+Working+on+Hosts+-+Fix",
"content": "In case you are unable to use SSH authentication via RSA key, the cause might be a wrong SELinux context on the /home/{user}/.ssh directory.Check /var/log/secure:The "maximum authentication attempts exceeded" error might indicate that this is the case.Check the /home/{user}/.ssh directory with the "-Z" option:$ ls -laZ /home/{user}/.sshOn the screen above is an example of a wrong context. Fix it by:$ chcon -R system_u:object_r:usr_t:s0 /home/{user}/.sshVerify the context has changed:"
},
{
"title": "Kubernetes Operations",
"pageID": "228923667",
"pageLink": "/display/GMDM/Kubernetes+Operations",
"content": ""
},
{
"title": "Kubernetes upgrades",
"pageID": "337842009",
"pageLink": "/display/GMDM/Kubernetes+upgrades",
"content": "IntroductionKubernetes clusters provided by PDKS are upgraded quarterly. To make sure it doesn't break MDM Hub, we've established the process described in this article.K8s upgrade process in the PDKS platformVerify MDM Hub's compatibility with the new K8s versionkube-no-troubleUpgrades are done 1 version up, i.e. 1.23 → 1.24, so we need to make sure we're not using any APIs removed in the upgraded version.To find all objects using deprecated APIs, run kube-no-trouble If there are "Deprecated APIs" listed for the next K8s version, MDM Hub's team must update the affected objects first.In the example, an upgrade from 1.23 to 1.24 doesn't require any work.Upgrade sandbox/non-prod/prod clustersPDKS does a rolling upgrade of all nodes, starting with the Control Plane, then dynamic (or "flex") nodes, and then the static nodes.Assist and verifyMDM Hub's team support during prod upgradesMDM Hub's team presence and assistance are required during prod upgrades. During the agreed upgrade window one designated person must be actively monitoring the upgrade process and react if issues are found."
},
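The deprecated-API scan that kube-no-trouble performs can be sketched as below. The removed-API table is a small illustrative excerpt (kube-no-trouble's real rule set is much larger), and the manifest list is hypothetical:

```python
# Sketch of the check kube-no-trouble automates: flag manifests whose
# apiVersion/kind pair is removed in the target Kubernetes version.
# REMOVED_APIS is an illustrative excerpt, not an exhaustive list.
REMOVED_APIS = {
    "1.25": {("policy/v1beta1", "PodDisruptionBudget"),
             ("policy/v1beta1", "PodSecurityPolicy")},
    "1.26": {("autoscaling/v2beta2", "HorizontalPodAutoscaler")},
}

def find_deprecated(manifests, target_version):
    """Return manifests using an API removed in target_version."""
    removed = REMOVED_APIS.get(target_version, set())
    return [m for m in manifests
            if (m.get("apiVersion"), m.get("kind")) in removed]

# Hypothetical manifests, as if collected from the cluster.
manifests = [
    {"apiVersion": "policy/v1beta1", "kind": "PodDisruptionBudget",
     "metadata": {"name": "mdm-pdb"}},
    {"apiVersion": "apps/v1", "kind": "Deployment",
     "metadata": {"name": "mdmhub-mdm-manager"}},
]
offenders = find_deprecated(manifests, "1.25")  # → only the PodDisruptionBudget
```

An empty result for the target version means no work is needed before the upgrade, matching the 1.23 → 1.24 example above.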
{
"title": "MongoDB backup and restore",
"pageID": "322548514",
"pageLink": "/display/GMDM/MongoDB+backup+and+restore",
"content": "IntroductionPercona Backup for MongoDBWe are using Percona Backup for MongoDB (PBM) - an open-source and distributed solution for consistent backups and restore of production MongoDB clusters. PBM functions used in MDM Hub are marked in green.How are backups done in MDM Hub?ArchitectureThe solution was built in 4 partspbm-agent container - each MongoDB pod has been extended by adding a sidecar container - it handles all backup/restore operationsmongodb-pbm-config - k8s job applies pbm configuration stored in a ConfigMap every deploymentmongodb-pbm-client - k8s deployment provides a pod with ready-to-use pbm command line interfacemongodb-pbm-full-backup - k8s cronjob - runs backup in a configured scheduleCodepbm-agent - helm/mongo/src/main/helm/templates/mongo.yamlmongodb-pbm-config - helm/mongo/src/main/helm/templates/mongodb-pbm-config.yamlmongodb-pbm-client - helm/mongo/src/main/helm/templates/mongodb-pbm-client.yamlmongodb-pbm-full-backup - helm/mongo/src/main/helm/templates/mongodb-pbm-full-backup.yamlConfigurationGeneral rules Full backup every weekendIncremental (Point-in-time recovery) backup every 10 minutesDetailsConfig is stored per environment in mdm-hub-cluster-env project in {env}/prod/namespaces/{env}-backend/values.yaml path, under mongo.pbm key.Where are backups stored?All backups are stored in separate s3 buckets.AMER Prod - pfe-atp-us-e1-prod-mdmhub-backupamrasp202207120808/amer/archive/mongoAPAC Prod - pfe-atp-ap-se1-prod-mdmhub-backuaspasp202207141502/apac/archive/mongoEMEA Prod - pfe-atp-eu-w1-prod-mdmhub-backupemaasp202207120811/emea/archive/mongoBackupHow to do a manual full backup?Run a pbm backup --wait command in a mongodb-pbm-client podHow to do an incremental backup?You don't have to do anything. 
If you really need to do an incremental backup, wait up to 10 minutes for the next scheduled point-in-time backup.RestoreHow to restore DB when it's empty - Disaster Recovery (DR) scenarioPercona configuration is stored in the database itself. If the database is completely removed (EKS cluster, PVCs, or all data from DB), the Percona agent won't be able to restore the DB from backup. You need at least an empty MongoDB and the PBM configuration restored.Deploy MDM Hub Backend Using Jenkins JobAn empty database will be createdPercona will be configuredpbm-agent pod will be createdChoose the preferred restore way:full backupincremental backupHow to restore DB from a full backupShut down all MongoDB clients - MDM Hub componentsDisable PITR$ pbm config --set pitr.enabled=falseRun pbm list to get a named list of backupsRun pbm restore [<backup_name>]Run pbm status to check the current restore statusAfter a successful restore, re-enable PITR$ pbm config --set pitr.enabled=trueHow to restore DB from an incremental (Point-in-Time Recovery) backupShut down all MongoDB clients - MDM Hub componentsDisable PITR$ pbm config --set pitr.enabled=falseRun pbm list to get the available time range for the PITR restoreRun pbm restore --time=2006-01-02T15:04:05Run pbm status to check the current restore statusAfter a successful restore, re-enable PITR$ pbm config --set pitr.enabled=true"
},
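The restore choice described above (full backup vs point-in-time) can be sketched as a small helper. This is an illustration only: the window logic assumes PITR covers one continuous recorded range, and all timestamps and backup names are hypothetical, not real pbm output:

```python
from datetime import datetime

def pick_restore(full_backups, pitr_start, pitr_end, target):
    """Return the pbm command string restoring closest to the target time.

    Prefer a point-in-time restore when the target falls inside the
    recorded PITR window; otherwise fall back to the newest full backup
    taken at or before the target.
    """
    if pitr_start <= target <= pitr_end:
        return f"pbm restore --time={target:%Y-%m-%dT%H:%M:%S}"
    candidates = [b for b in full_backups if b <= target]
    if not candidates:
        raise ValueError("no backup covers the requested target time")
    return f"pbm restore {max(candidates):%Y-%m-%dT%H:%M:%SZ}"

# Hypothetical weekend full backup and PITR window.
backups = [datetime(2024, 5, 4, 2, 0)]
cmd = pick_restore(backups, datetime(2024, 5, 4, 2, 10),
                   datetime(2024, 5, 6, 9, 0), datetime(2024, 5, 5, 12, 0))
# cmd == "pbm restore --time=2024-05-05T12:00:00"
```

Either way, remember to shut down MDM Hub components first and re-enable PITR after the restore, as described in the procedure above.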
{
"title": "Restart service",
"pageID": "228923671",
"pageLink": "/display/GMDM/Restart+service",
"content": "To restart MDMHUB service you have to have access to the Kubernetes console:Find the pod name that you want to restart: kubectl get pods --namespace {{mdmhub env namespace}}raselek@CF-0YVKSY:~/kafka/amer_dev/kafka_client$ kubectl get pods --namespace amer-devNAME                                                 READY   STATUS    RESTARTS   AGEmdmhub-batch-service-dbbf4486d-snpgc                 2/2     Running   0          22hmdmhub-callback-service-55c6dd696d-5bn4h             2/2     Running   0          22hmdmhub-entity-enricher-f9f884f97-cwqqc               2/2     Running   0          22hmdmhub-event-publisher-756b46cfd7-7ccqp              2/2     Running   0          22hmdmhub-mdm-api-router-9b9596f8b-8wqrn                2/2     Running   0          9hmdmhub-mdm-manager-678764db5-fqlzf                   2/2     Running   0          9hmdmhub-mdm-reconciliation-service-66b65c7bf8-jhvhv   2/2     Running   0          9hmdmhub-reltio-subscriber-6495fb4878-c8hp5            2/2     Running   0          9h2. Delete the pod that you selected: kubectl delete pod {{selected pod name}} --namespace {{mdmhub env namespace}}raselek@CF-0YVKSY:~/kafka/amer_dev/kafka_client$ kubectl delete pod mdmhub-mdm-reconciliation-service-66b65c7bf8-jhvhv --namespace amer-devpod "mdmhub-mdm-reconciliation-service-66b65c7bf8-jhvhv" deleted3. 
After the above operation you will see the newly created pod:raselek@CF-0YVKSY:~/kafka/amer_dev/kafka_client$ kubectl get pods --namespace amer-devNAME                                                 READY   STATUS    RESTARTS   AGEmdmhub-batch-service-dbbf4486d-snpgc                 2/2     Running   0          22hmdmhub-callback-service-55c6dd696d-5bn4h             2/2     Running   0          22hmdmhub-entity-enricher-f9f884f97-cwqqc               2/2     Running   0          22hmdmhub-event-publisher-756b46cfd7-7ccqp              2/2     Running   0          22hmdmhub-mdm-api-router-9b9596f8b-8wqrn                2/2     Running   0          9hmdmhub-mdm-manager-678764db5-fqlzf                   2/2     Running   0          9hmdmhub-mdm-reconciliation-service-66b65c7bf8-ns88k   2/2     Running   0          2m32smdmhub-reltio-subscriber-6495fb4878-c8hp5            2/2     Running   0          9hThis is the restarted instance."
},
{
"title": "Scaling services",
"pageID": "228923952",
"pageLink": "/display/GMDM/Scaling+services",
"content": "To do this action access to the runtime configuration repository is required. You have to modify deployment configuration for selected component - let's assume that it is mdm-reconciliation-service:Modify values.yaml for MDMHUB environment {{region}}/{{cluster class}}/namespaces/{{mdmhub env name}}/values.yaml:components:  registry: artifactory.COMPANY.com/mdmhub-docker-dev  deployments:    mdm_reconciliation_service:      enabled: true      replicas: 2      hostAliases: *hostAliases      resources:        component:          requests:            memory: "2560Mi"            cpu: "200m"          limits:            memory: "3840Mi"            cpu: "4000m"      logging: *loggingAnd change the value of the "replicas" parameter. If it doesn't exist you have to add this to the component deployment configuration.2. Commit and push changes,3. Go to Jenkins job responsible for deploying changes to the selected environment and run the job,4. After deploying check if the configuration has been applied correctly: kubectl get pods --namespace {{mdmhub env name}}:raselek@CF-0YVKSY:~/kafka/amer_dev/kafka_client$ kubectl get pods --namespace amer-devNAME                                                 READY   STATUS    RESTARTS   AGEmdmhub-batch-service-dbbf4486d-snpgc                 2/2     Running   0          22hmdmhub-callback-service-55c6dd696d-5bn4h             2/2     Running   0          22hmdmhub-entity-enricher-f9f884f97-cwqqc               2/2     Running   0          22hmdmhub-event-publisher-756b46cfd7-7ccqp              2/2     Running   0          22hmdmhub-mdm-api-router-9b9596f8b-8wqrn                2/2     Running   0          9hmdmhub-mdm-manager-678764db5-fqlzf                   2/2     Running   0          9hmdmhub-mdm-reconciliation-service-66b65c7bf8-ns88k   2/2     Running   0          2m32smdmhub-mdm-reconciliation-service-66b68c7bf8-ndksk   2/2     Running   0          2m32smdmhub-reltio-subscriber-6495fb4878-c8hp5            2/2     Running 
  0          9hYou will be able to see the desired number of pods."
},
{
"title": "Stop/Start service",
"pageID": "228923678",
"pageLink": "/pages/viewpage.action?pageId=228923678",
"content": "To do this action access to the runtime configuration repository is required. Start/Stop service means enable/disable component deployment. You have to modify deployment configuration for selected component - let's assume that it is mdm-reconciliation-service:Modify values.yaml for MDMHUB environment {{region}}/{{cluster class}}/namespaces/{{mdmhub env name}}/values.yaml:components:  registry: artifactory.COMPANY.com/mdmhub-docker-dev  deployments:    mdm_reconciliation_service:      enabled: true      hostAliases: *hostAliases      resources:        component:          requests:            memory: "2560Mi"            cpu: "200m"          limits:            memory: "3840Mi"            cpu: "4000m"      logging: *loggingChange the enabled flag to false.2. Commit and push changes,3. Go to Jenkins job responsible for deploying changes to the selected environment and run the job,4. After deploying check if the configuration has been applied correctly: kubectl get pods --namespace {{mdmhub env name}}raselek@CF-0YVKSY:~/kafka/amer_dev/kafka_client$ kubectl get pods --namespace amer-devNAME                                                 READY   STATUS    RESTARTS   AGEmdmhub-batch-service-dbbf4486d-snpgc                 2/2     Running   0          22hmdmhub-callback-service-55c6dd696d-5bn4h             2/2     Running   0          22hmdmhub-entity-enricher-f9f884f97-cwqqc               2/2     Running   0          22hmdmhub-event-publisher-756b46cfd7-7ccqp              2/2     Running   0          22hmdmhub-mdm-api-router-9b9596f8b-8wqrn                2/2     Running   0          9hmdmhub-mdm-manager-678764db5-fqlzf                   2/2     Running   0          9hmdmhub-reltio-subscriber-6495fb4878-c8hp5            2/2     Running   0          9hThere should not be any active pods of the disabled component.To enable service you have to do the same steps but remember that "enabled" flag should be set to true."
},
{
"title": "Open Traffic from Outside COMPANY to MDM Hub",
"pageID": "250142861",
"pageLink": "/display/GMDM/Open+Traffic+from+Outside+COMPANY+to+MDM+Hub",
"content": "EMEA NProdAWS Account ID: 432817204314VPC ID: vpc-004cb58768e3c8459SecurityGroup: sg-04d4116a040a7e1da - MDMHub-kafka-and-api-proxy-external-nprod-sgProxy documentation: EMEA External proxyEMEA ProdAWS Account ID: 432817204314VPC ID: vpc-004cb58768e3c8459SecurityGroup: sg-06305fd9d3b0992a6 - MDMHub-kafka-and-api-proxy-external-prod-sgProxy documentation: EMEA External proxyEXUS (GBL) ProdAWS Account ID: 432817204314VPC ID: vpc-004cb58768e3c8459SecurityGroup: sg-0cd8ba02f6351f383 - Mdm-reltio-internet-traffic-SGUSno whitelisting"
},
{
"title": "Replace S3 Keys",
"pageID": "187796851",
"pageLink": "/display/GMDM/Replace+S3+Keys",
"content": "CREATE a ticket if there is an issue with KEYs (rotation required - expired)REQUEST:http://btondemand.COMPANY.com/getsupport#!/g71h1sgv0/0QUEUE: GBL-BTI-IOD AWS FULL SUPPORTHi Team,Our S3 access key expired - I am receiving - The AWS Access Key Id you provided does not exist in our records.KEY details:BucketName User name Access key ID Secret access keygblmdmhubnprodamrasp100762 SRVC-MDMGBLFT ●●●●●●●●●●●●●●●●●●●● ●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●Could you please regenerate this S3 key?Regards,MikolajBITBUCKET REPLACE:inventory/<env>_gblus/group_vars/all/secret.ymlREPLACE and Post replace tasks:REPLACE: 1. decrypt - group_vars/all/secret.yml2. replace on non-prod and prod 3. encrypt and pushPost Replace TASK: NON PROD NEW nonprod <KEY> <SECRET>REDEPLOY 1. Airflow:All Airflow jobs - https://jenkins-gbicomcloud.COMPANY.com/job/MDM_Airflow_Deploy_jobs/ (take list from airflow_components variable)- dev: concat_s3_files,merge_unmerge_entities_gblus,hub_reconciliation_v2,lookup_values_export_to_s3,reconciliation_koloneview,reconciliation_snowflake,reconciliation_icue,import_merges_from_reltio,export_merges_from_reltio_to_s3_full,export_merges_from_reltio_to_s3_inc- qa: concat_s3_files,merge_unmerge_entities_gblus,hub_reconciliation_v2,lookup_values_export_to_s3,reconciliation_koloneview,reconciliation_snowflake,reconciliation_icue,import_merges_from_reltio,export_merges_from_reltio_to_s3_full,export_merges_from_reltio_to_s3_inc- stage: merge_unmerge_entities_gblus,hub_reconciliation_v2,lookup_values_export_to_s3,reconciliation_koloneview,reconciliation_snowflake,reconciliation_icue,import_merges_from_reltio,export_merges_from_reltio_to_s3_full,export_merges_from_reltio_to_s3_inc2. 
FLEX connector to S3 DEV AND QA- replace in kafka-connect-flex:/app/kafka-connect-flex/<env>/config/s3-connector-config.json:/app/kafka-connect-flex/<env>/config/s3-connector-config-update.jsonUpdate on Main(check logs with errors and execute)- curl -X GET http://localhost:8083/connectors/S3SinkConnector/config- curl -X PUT -H "Content-Type: application/json" localhost:8083/connectors/S3SinkConnector/config -d @/etc/kafka/config/s3-connector-config-update.json- curl -X POST http://localhost:8083/connectors/S3SinkConnector/tasks/0/restart- curl -X POST http://localhost:8083/connectors/S3SinkConnector/restart - curl -X GET http://localhost:8083/connectors/S3SinkConnector/status3. Snowflake:--changeset warecp:LOV_DATA_STG runOnChange:truecreate or replace stage landing.LOV_DATA_STG url='s3://gblmdmhubnprodamrasp100762/us/dev/outbound/SNOWFLAKE'credentials=(aws_key_id='<KEY>' aws_secret_key='<SECRET>')FILE_FORMAT=(TYPE= 'JSON' STRIP_OUTER_ARRAY= true)create or replace stage landing.LOV_DATA_STG url='s3://gblmdmhubnprodamrasp100762/us/qa/outbound/SNOWFLAKE'credentials=(aws_key_id='<KEY>' aws_secret_key='<SECRET>')FILE_FORMAT=(TYPE= 'JSON' STRIP_OUTER_ARRAY= true)create or replace stage landing.LOV_DATA_STG url='s3://gblmdmhubnprodamrasp100762/us/stage/outbound/SNOWFLAKE'credentials=(aws_key_id='<KEY>' aws_secret_key='<SECRET>')FILE_FORMAT=(TYPE= 'JSON' STRIP_OUTER_ARRAY= true)--changeset morawm03:MERGE_TREE_DATA_STG runOnChange:truecreate or replace stage landing.MERGE_TREE_DATA_STG url='s3://gblmdmhubnprodamrasp100762/us/dev/outbound/SNOWFLAKE_MERGE_TREE'credentials=(aws_key_id='<KEY>' aws_secret_key='<SECRET>')FILE_FORMAT=(TYPE= 'JSON' STRIP_OUTER_ARRAY= true COMPRESSION= 'GZIP')create or replace stage landing.MERGE_TREE_DATA_STG url='s3://gblmdmhubnprodamrasp100762/us/qa/outbound/SNOWFLAKE_MERGE_TREE'credentials=(aws_key_id='<KEY>' aws_secret_key='<SECRET>')FILE_FORMAT=(TYPE= 'JSON' STRIP_OUTER_ARRAY= true COMPRESSION= 'GZIP')create or replace stage 
landing.MERGE_TREE_DATA_STG url='s3://gblmdmhubnprodamrasp100762/us/stage/outbound/SNOWFLAKE_MERGE_TREE'credentials=(aws_key_id='<KEY>' aws_secret_key='<SECRET>')FILE_FORMAT=(TYPE= 'JSON' STRIP_OUTER_ARRAY= true COMPRESSION= 'GZIP')--changeset warecp:reconcilation_URL runOnChange:truecreate or replace stage customer.RECONCILIATION_DATA_STG url='s3://gblmdmhubnprodamrasp100762/us/dev/inbound/hub/reconciliation/SNOWFLAKE/'credentials=(aws_key_id='<KEY>' aws_secret_key='<SECRET>')FILE_FORMAT = ( TYPE = CSV FIELD_DELIMITER = ',' COMPRESSION=NONE )create or replace stage customer.RECONCILIATION_DATA_STG url='s3://gblmdmhubnprodamrasp100762/us/qa/inbound/hub/reconciliation/SNOWFLAKE/'credentials=(aws_key_id='<KEY>' aws_secret_key='<SECRET>')FILE_FORMAT = ( TYPE = CSV FIELD_DELIMITER = ',' COMPRESSION=NONE )create or replace stage customer.RECONCILIATION_DATA_STG url='s3://gblmdmhubnprodamrasp100762/us/stage/inbound/hub/reconciliation/SNOWFLAKE/'credentials=(aws_key_id='<KEY>' aws_secret_key='<SECRET>')FILE_FORMAT = ( TYPE = CSV FIELD_DELIMITER = ',' COMPRESSION=NONE )PROD:NEW prod <KEY> <SECRET>REDEPLOY 1. Airflow:All Airflow jobs - https://jenkins-gbicomcloud.COMPANY.com/job/MDM_Airflow_Deploy_jobs/job/deploy_mdmgw_airflow_services__prod_gblus/ (take list from airflow_components variable)- prod: concat_s3_files,merge_unmerge_entities_gblus,hub_reconciliation_v2,lookup_values_export_to_s3,reconciliation_koloneview,reconciliation_snowflake,reconciliation_icue,export_merges_from_reltio_to_s3_full,export_merges_from_reltio_to_s3_inc               Manually replace connections and variables in http://amraelp00007847.COMPANY.com:9110/airflow/home for gblus prod DAGS2. 
FLEX connector to S3- replace in kafka-connect-flex (on Master only):/app/kafka-connect-flex/prod/config/s3-connector-config.json:/app/kafka-connect-flex/prod/config/s3-connector-config-update.jsonUpdate on Main(check logs with errors and execute)- curl -X GET http://localhost:8083/connectors/S3SinkConnector/config- curl -X PUT -H "Content-Type: application/json" localhost:8083/connectors/S3SinkConnector/config -d @/etc/kafka/config/s3-connector-config-update.json- curl -X POST http://localhost:8083/connectors/S3SinkConnector/tasks/0/restart- curl -X POST http://localhost:8083/connectors/S3SinkConnector/restart - curl -X GET http://localhost:8083/connectors/S3SinkConnector/status3. Snowflake:--changeset warecp:LOV_DATA_STG runOnChange:truecreate or replace stage landing.LOV_DATA_STG url='s3://gblmdmhubprodamrasp101478/us/prod/outbound/SNOWFLAKE'credentials=(aws_key_id='<KEY>' aws_secret_key='<SECRET>')FILE_FORMAT=(TYPE= 'JSON' STRIP_OUTER_ARRAY= true)--changeset morawm03:MERGE_TREE_DATA_STG runOnChange:truecreate or replace stage landing.MERGE_TREE_DATA_STG url='s3://gblmdmhubprodamrasp101478/us/prod/outbound/SNOWFLAKE_MERGE_TREE'credentials=(aws_key_id='<KEY>' aws_secret_key='<SECRET>')FILE_FORMAT=(TYPE= 'JSON' STRIP_OUTER_ARRAY= true COMPRESSION= 'GZIP')--changeset warecp:reconcilation_URL runOnChange:truecreate or replace stage customer.RECONCILIATION_DATA_STG url='s3://gblmdmhubprodamrasp101478/us/prod/inbound/hub/reconciliation/SNOWFLAKE/'credentials=(aws_key_id='<KEY>' aws_secret_key='<SECRET>')FILE_FORMAT = ( TYPE = CSV FIELD_DELIMITER = ',' COMPRESSION=NONE )4. HOST:- replace archiver-serviceson 3 nodes::/app/archiver/.s3cfg:/app/archiver/config/archiver.env"
},
{
"title": "Resize PV, LV, FS",
"pageID": "164470164",
"pageLink": "/display/GMDM/Resize+PV%2C+LV%2C+FS",
"content": "\nsudo pvresize /dev/nvme2n1\nsudo lvextend -L +<SIZE_TO_INCREASE>G /dev/mapper/docker-thinpool\nExtending the LVM using an additional disk:\nsudo pvcreate /dev/nvme3n1 \nsudo vgextend mdm_vg /dev/nvme3n1\nsudo lvm lvextend -l +100%FREE /dev/mdm_vg/data\nsudo xfs_growfs -d /dev/mapper/mdm_vg-data\n"
},
{
"title": "Resolve Docker Issues After Instance Restart (Flex US)",
"pageID": "163927016",
"pageLink": "/pages/viewpage.action?pageId=163927016",
"content": "After restarting one of the US FLEX instances, issues with service user mdmihpr/mdmihnpr may come up.Resolve them using the following:Change the owner of the Docker socket[root@amraelp00005781 run]# cd /var/run/[root@amraelp00005781 run]# chown root:mdmihub docker.sockIncrease vm.max_map_countIf ElasticSearch is not starting:[root@amraelp00005781 run]# sysctl -w vm.max_map_count=262144Reset offsets on EFK topicsIf there are no logs in Kibana, use the Kafka Client to reset offsets on the efk topics using the "--to-datetime" option, pointing to 6 months prior.Prune DockerIf there is a ThinPool error coming up, use:[root@amraelp00005781 run]# docker system prune -a"
},
{
"title": "Service User ●●●●●●●●●●●●●●●● [https://confluence.COMPANY.com/plugins/servlet/pii4conf/pii?id=1588321]",
"pageID": "194547472",
"pageLink": "/pages/viewpage.action?pageId=194547472",
"content": "Log into the machine via other account with root access.For service user mdm (GBL NPROD/PROD):\n$ chage -I -1 -m 0 -M 99999 -E -1 mdm\n"
},
{
"title": "Jenkins",
"pageID": "250676213",
"pageLink": "/display/GMDM/Jenkins",
"content": ""
},
{
"title": "Proxy on bitbucket-insightsnow.COMPANY.com (fix Hostname issue and timeouts)",
"pageID": "250147973",
"pageLink": "/pages/viewpage.action?pageId=250147973",
"content": "On GBLUS DEV host amraelp00007335.COMPANY.com (●●●●●●●●●●●●) set up a service and route to proxy Bitbucket:kong_services: #----------------------DEV--------------------------- - create_or_update: False vars: name: "{{ kong_env }}-bitbucket-proxy" url: "http://bitbucket-insightsnow.COMPANY.com/" connect_timeout: 120000 write_timeout: 120000 read_timeout: 120000kong_routes: #----------------------DEV--------------------------- - create_or_update: False vars: name: "{{ kong_env }}-bitbucket-proxy-route" service: "{{ kong_env }}-bitbucket-proxy" paths: [ "/" ] methods: [ "GET", "POST", "PATCH", "DELETE" ]Then we can access Bitbucket through:curl https://gbl-mdm-hub-us-nprod.COMPANY.com:8443/repos?visibility=publicThe change is in the repository and currently deployed: http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-env-config/browse/inventory/dev_gblus/group_vars/kong_v1/kong_dev.yml-----------------------------------------------------------------------------------------------------------------Next, set up the nginx proxy to route port 80 to port 8443.Go to ec2-user@gbinexuscd01:/opt/cd-env/bitbucket-proxyRUN bitbucket-nginx:dded05295c16        nginx:1.17.3                                                          "nginx -g 'daemon of…"   About an hour ago   Up 16 minutes           0.0.0.0:80->80/tcp                                            bitbucket-nginxConfig:\nhttp {\n    server {\n        listen              80;\n        server_name         gbinexuscd01;\n\n        location / {\n            rewrite ^\\/(.*) /$1 break;\n            proxy_pass  https://gbl-mdm-hub-us-nprod.COMPANY.com:8443;\n            resolver ●●●●●●●●●●;\n        }\n    }\n}\n\nevents {}\nThis config will route port 80 to the gbl-mdm-hub-us-nprod.COMPANY.com:8443 host, i.e. to Bitbucket.Next, add to all Jenkins and Jenkins-Slaves the following entry in /etc/hosts:docker exec -it -u root jenkins bashdocker exec 
-it -u root nexus_jenkins_slave2 bashdocker exec -it -u root nexus_jenkins_slave bashvi /etc/hostsadd:●●●●●●●●●●●●● bitbucket-insightsnow.COMPANY.comwhere ●●●●●●●●●●●●● is the IP of bitbucket-nginxto check, run docker inspect bitbucket-nginx"Gateway": "192.168.128.1",Then check on each Slave and Jenkins:curl http://bitbucket-insightsnow.COMPANY.com/repos?visibility=publicYou should receive the HTML page response."
},
{
"title": "Unable to Find Valid Certification Path to Requested Target (GBLUS)",
"pageID": "164470045",
"pageLink": "/pages/viewpage.action?pageId=164470045",
"content": "The following issue is caused by missing COMPANY - PBACA-G2.cer and RootCA-G2.cer in the java cacerts file.Issue:06:41:54 2020-12-24 06:41:52.843 INFO --- [ Thread-4] c.consol.citrus.report.LoggingReporter : FAILURE: Caused by: ResourceAccessException: I/O error on POST request for "https://gbl-mdm-hub-us-nprod.COMPANY.com:8443/apidev/hcp": sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target; nested exception is javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested targethttps://jenkins-gbicomcloud.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/project%252Ffletcher/151/console Solution:Log in to:mapr@gbinexuscd01 - ●●●●●●●●●●●●●docker exec -it nexus_jenkins_slave bashcd /etc/ssl/certs/javatouch PBACA-G2.cer  - PBACA-G2.certouch RootCA-G2.cer  - RootCA-G2.cerkeytool -importcert -trustcacerts -keystore cacerts -alias COMPANYInter -file PBACA-G2.cer -storepass changeitkeytool -importcert -trustcacerts -keystore cacerts -alias COMPANYRoot -file RootCA-G2.cer -storepass changeitnext - docker exec -it nexus_jenkins_slave2 bashPermanent Solution. TODO:add PBACA-G2.cer and RootCA-G2.cer to /etc/ssl/certs/java/cacerts in Dockerfile:COPY certs/PBACA-G2.cer /etc/ssl/certs/java/PBACA-G2.cerCOPY certs/RootCA-G2.cer /etc/ssl/certs/java/RootCA-G2.cerRUN cd /etc/ssl/certs/java && keytool -importcert -trustcacerts -keystore cacerts -alias COMPANYInter -file PBACA-G2.cer -storepass changeit -nopromptRUN cd /etc/ssl/certs/java && keytool -importcert -trustcacerts -keystore cacerts -alias COMPANYRoot -file RootCA-G2.cer -storepass changeit -nopromptfix - nexus_jenkins_slave2 and nexus_jenkins_slave"
},
{
"title": "Monitoring",
"pageID": "411343429",
"pageLink": "/display/GMDM/Monitoring",
"content": ""
},
{
"title": "FLEX: Monitoring Batch Loads",
"pageID": "513737976",
"pageLink": "/display/GMDM/FLEX%3A+Monitoring+Batch+Loads",
"content": "Opening The DashboardUse one of links below:PROD dashboard: https://mdm-log-management-us-trade-prod.COMPANY.com:5601/app/kibana#/dashboard/prod-batch-loads-dashboardTEST dashboard: https://mdm-log-management-us-trade-nonprod.COMPANY.com:5601/app/kibana#/dashboard/test-batch-loads-dashboardDEV dashboard: https://mdm-log-management-us-trade-nonprod.COMPANY.com:5601/app/kibana#/dashboard/dev-batch-loads-dashboardNavigating The DashboardUse the selector in upper right corner to change the time range (for example Last 24 hours or Last 7 days).The search bar allows searching for a specific file name.The dashboard is divided into 5 main sections:File by type - how many files of each input type have been loaded. File types are: SAP, DEA, HIN, FLEX_340B, IDENTIFIERS, ADDRESSES, FLEX_BULK.File load status count - breakdown of each file type and final status of records from that fileFile load count - depiction of loads through timeFile load summary - most important section, containing detailed information about each loaded file:File - file typeStart time/End time - start and end of file processing. Important note: this applies only to parsing, preprocessing and mapping the records - those are later loaded into Reltio asynchronouslyFile nameStatus - indicates that the file processing has finished correctly, without interruption or failuresLoad timeBad Records - records that could not be parsed or mapped, usually due to malformed inputInput Entities - number of records (lines) that the file containedProcessed Entities - number of individual profiles extracted from the file. 
This number may be lower than Input Entities, for example due to the input model requiring aggregation of multiple lines (SAP), skipping unchanged records (DEA), etc.Created - number of profiles that were identified as missing from MDM and have been passed to ReltioUpdated - number of profiles that were identified as changed since the last load and have been passed to ReltioPost Processing - Only for DEA - number of profiles that are present in MDM, but were not present in the DEA file. In this case, the records will be deleted in MDM (but there is a limit of 22,000 deleted profiles per single file - security mechanism)Skipped Entities - number of profiles that were not updated in Reltio, because their data has not changed since the last load. This is detected using records' checksums, calculated for each record while processing the file. Checksums are stored in MDM Hub's cache and compared with future recordsSuspended Entities - Only for DEA - number of profiles that could have been deleted from MDM, but were not due to the 22,000 delete limit being exceededCountResponse status load summary - final statuses of loading the records into Reltio. Records are loaded asynchronously and their statuses are gradually updated in this section, after the file is present in the File load summary section"
},
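The Skipped Entities checksum mechanism and the DEA 22,000-delete limit described in the dashboard fields above can be sketched as follows; the in-memory dict stands in for MDM Hub's real checksum cache, and all names are illustrative:

```python
import hashlib
import json

DEA_DELETE_LIMIT = 22_000  # security mechanism described above

def checksum(record: dict) -> str:
    """Stable checksum of a record, used to detect unchanged profiles."""
    return hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()

def classify(records, cache):
    """Split incoming records into changed (passed to Reltio) and skipped."""
    changed, skipped = [], []
    for rec in records:
        digest = checksum(rec)
        if cache.get(rec["id"]) == digest:
            skipped.append(rec)          # unchanged since the last load
        else:
            changed.append(rec)
            cache[rec["id"]] = digest    # remember for future loads
    return changed, skipped

def partition_deletes(missing_ids):
    """DEA post-processing: delete up to the limit, suspend the rest."""
    return missing_ids[:DEA_DELETE_LIMIT], missing_ids[DEA_DELETE_LIMIT:]
```

Re-loading an identical record thus counts toward Skipped Entities, and any DEA deletions beyond the limit show up as Suspended Entities.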
{
"title": "Quality Gateway Alerts",
"pageID": "438317787",
"pageLink": "/display/GMDM/Quality+Gateway+Alerts",
"content": "Quality Gateway is MDM Hub's publishing layer framework responsible for detecting Data Quality issues before publishing an event downstream (to Kafka consumers or Snowflake). You can find more details on the Quality Gateway in the documentation: Quality Gateway - Event Publishing FilterThere are 4 statuses that an event (entity/relationship) can receive after being processed by the Quality Gateway:OK - event passed all quality rulesBROKEN - event did not pass one or more quality rules. If at least one of these quality rules is HARD, then the event will not be published. Regardless of quality rules' types, the entity/relationship will be saved in Hub's cache (MongoDB qualityRejects collection).AUTO_RESOLVED - event passed all quality rules, but its entity/relationship was found in Hub's cache. As a result, the record will be removed from the cache.MANUALLY_RESOLVED - same as above, but the newest event was created by reconciliation.AUTO_RESOLVED events mean that they were preceded by a BROKEN one, which signifies potential data problems or processing problems.This is why we have implemented two alerts to track these statuses, which might otherwise be missed.quality_gateway_auto_resolved_sum/quality_gateway_auto_resolved_eventBoth alerts should be approached similarly, as it is expected that they always get triggered together and tell us about the same thing.Pick an example from one of the quality_gateway_auto_resolved_event alerts and take the entity/relationship URI:Use Kibana's HUB Events dashboard to find all the recent events for this URI:If you find no events at first, try extending the time range (for example 7 days).Scroll down to the event list and open each event. Under metadata.quality.* keys you will find Quality Gateway info:Find the first BROKEN event. Under metadata.quality.issues you will find the list of quality rules that this event did not pass. 
Quality rules from this list match the quality rules configured in the Event Publisher's config.Example repository config file path (amer-prod): mdm-hub-cluster-env\\amer\\prod\\namespaces\\amer-prod\\config_files\\event-publisher\\config\\application.ymlQuality rules are expressions written in Groovy. Every event passing the appliesTo filter must also pass the mustPass filter, otherwise it will be BROKEN.Records in BROKEN state are saved in MongoDB along with the full event that triggered the rejection. For AUTO_RESOLVED and MANUALLY_RESOLVED it is a bit trickier - the record is no longer in MongoDB.To find the exact event that triggered the rejection you can use AKHQ - the Publisher's and QualityGateway's input Kafka topic is ${env}-internal-reltio-proc-event. Keep in mind that the retention configured for this topic should be around 7 days - events older than that get automatically removed from the topic.Search by the entity/relationship URI in Key. Match the BROKEN event with Kibana by the timestamp.There is an infinite number of ways in which an event can be broken, so some investigation will often be needed.Most common cases so far:Blank ProfileDescription: when fetching the entity JSON through Postman, the JSON has no attributes, but the entity is not inactive.This is not expected and should be reported to the COMPANY MDM Team.RDM Temporary FailureDescription: all lookup attribute values in the entity JSON have lookupErrors. At least one lookupCode per JSON is expected (unless there are no lookup attributes).Good:Bad:This is not expected and should be reported to the COMPANY MDM Team.For extra points, find the exact API request/response to which Reltio responded with lookupErrors and add it to the ticket. You can find the request/response in Kibana's component logs (Discover > amer-prod-mdmhub) in MDM Manager's logs - POST entities/_byUris."
},
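The four statuses above boil down to a small decision function; a Python sketch with hypothetical flag names (the real gateway evaluates Groovy rules and a MongoDB qualityRejects cache lookup, not these booleans):

```python
def gateway_status(passed_all_rules, failed_hard_rule, in_rejects_cache, from_reconciliation):
    """Sketch of the four Quality Gateway statuses (hypothetical helper,
    not the real Event Publisher code).

    Returns (status, publish): publish tells whether the event goes downstream.
    """
    if not passed_all_rules:
        # BROKEN: saved to the qualityRejects cache; published only if no HARD rule failed
        return "BROKEN", not failed_hard_rule
    if in_rejects_cache:
        # A previously BROKEN record now passes: it is removed from the cache
        status = "MANUALLY_RESOLVED" if from_reconciliation else "AUTO_RESOLVED"
        return status, True
    return "OK", True
```

An AUTO_RESOLVED result is only reachable when the record was cached by an earlier BROKEN event, which is exactly why the two alerts above track it.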
{
"title": "Thanos",
"pageID": "411343433",
"pageLink": "/display/GMDM/Thanos",
"content": "\n\n\n\nComponents:Thanos stack is running on monitoring host: amraelp00020595.COMPANY.com under /app/monitoring/prometheus/ orchestrated with docker-compose:\n-bash-4.2$ docker-compose ps \nNAME IMAGE COMMAND SERVICE CREATED STATUS PORTS\nbucket_web       artifactory.p:main-7e879c6 "/bin/thanos tools b…" bucket_web 3 weeks ago Up 2 seconds \ncompactor        artifactory.p:main-7e879c6 "/bin/thanos compact…" compactor 44 hours ago Up 44 hours \nprometheus       artifactory.p...:v2.30.3 "/bin/prometheus --c…" prometheus 3 weeks ago Up 3 weeks 0.0.0.0:9090->9090/tcp, ...\nquery            artifactory.p:main-7e879c6 "/bin/thanos query -…" query 3 weeks ago Up 3 weeks \nquery_frontend   artifactory.p:main-7e879c6 "/bin/thanos query-f…" query_frontend 3 weeks ago Up 3 weeks \nrule             artifactory.p:main-7e879c6 "/bin/thanos rule --…" rule 3 weeks ago Up 3 weeks \nstore            artifactory.p:main-7e879c6 "/bin/thanos store -…" store 3 weeks ago Up 3 weeks \nthanos           artifactory.p:main-7e879c6 "/bin/thanos sidecar…" thanos 3 weeks ago Up 3 weeks 0.0.0.0:10901-10902->10901-10902/tcp,...\n\n\n\n\n\n\nThonos (sidecar):Description: uploads uncompacted prometheus chunks and implements thanos query APIMetrics: Thanos / Sidecar - Dashboards - Grafana (COMPANY.com)Thanos rule:Description: alternative place to calculate prometheus rulesMetrics: Thanos / Rule - Dashboards - Grafana (COMPANY.com)currently not used Thanos store:Description: implements Thanos query API by providing metrics from S3  Metrics: Thanos / Store - Dashboards - Grafana (COMPANY.com)Thanos bucket_web:Description: visualize metrics chunks on S3, allow to manage metrics chunks on S3 Metrics: https://mdm-monitoring.COMPANY.com/thanos-bucket-web/blocksThanos query_frontend:Description: cache layer implementing thanos query API Metrics: Thanos / Query Frontend - Dashboards - Grafana (COMPANY.com)Thanos query:Description: provides prometheus datasource api for grafana  Metrics: Thanos 
/ Query - Dashboards - Grafana (COMPANY.com)Thanos compactorDescription: compacts data on S3 Metrics: Thanos / Compact - Dashboards - Grafana (COMPANY.com)Thanos overview dashboard: Thanos / Overview - Dashboards - Grafana (COMPANY.com) \n\n\n\n\n\n\n\n\n\nGeneral troubleshooting: Every troubleshooting session starts with analyzing logs from the component mentioned in the alert. Thanos component logs always give clear information about the problem:Typical procedure:Check alertsCheck status of components with command: docker-compose ps Check the component log if it is crashlooping, with command: docker-compose logs <name_of_component>Alert rules:Below are links to the prometheus rules that can generate alerts: thanos-sidecarthanos-compactthanos-component-absentthanos-querythanos-ruleKnown issues: Thanos sidecar permission deniedAlert: after 24H ThanosCompactHaltedDescription: thanos can't read the shared folder with PrometheusSolution:Check thanos logs: docker-compose logs thanosconfirm issue "permission denied" accessing files Restart thanos with: docker-compose restart thanosCompactor haltedAlert: ThanosCompactHalted.Logs (docker-compose logs compactor)\ncompactor | ts=2024-03-25T13:23:43.380462226Z caller=compact.go:491 level=error msg="critical error detected; halting" err="compaction: group 0@3028247278749986641: compact blocks [/data/compact/0@3028247278749986641/01HSK9YKWVEDZGE9MF4XGARS58 /data/compact/0@3028247278749986641/01HSKBNHNJ9B1PC0NAYR5F67SJ /data/compact/0@3028247278749986641/01HSKDCFFEC9SZM5N5PTHK3TYM /data/compact/0@3028247278749986641/01HSKF3D9E0H1B4ZMAJ1YHKM1A]: populate block: chunk iter: cannot populate chunk 8 from block 01HSKDCFFEC9SZM5N5PTHK3TYM: segment index 0 out of range"\nDescription: Chunk uploaded to S3 is brokenSolution:Go to https://mdm-monitoring.COMPANY.com/thanos-bucket-web/blocksSearch for block 01HSKF3D9E0H1B4ZMAJ1YHKM1AClick on blockClick on "Mark Deletion"Restart compactor with: docker-compose restart compactor Verify if metric 
thanos_compact_halted returned to 0 Grafana -> thanos_compact_halted  Expired S3 keysAlert (maybe not tested): ThanosSidecarBucketOperationsFailedDescription: thanos can't access S3:Check the Thanos bucket page to see whether data chunks from S3 are visible: https://mdm-monitoring.COMPANY.com/thanos-bucket-web/blocksCheck component logs and confirm that Store, sidecar and bucket use old S3 keysRotate S3 Keys High memory usage by storeAlert: - Description: thanos store consumed more than 20% of node memory Solution: No clear solution; the root cause was not identified\n\n\n"
},
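The final verification step ("Verify if metric thanos_compact_halted returned to 0") can also be done against the Prometheus instant-query API instead of Grafana. A minimal sketch; the response shape is the standard /api/v1/query JSON, and the helper name plus the idea of polling the API are assumptions, not part of the SOP:

```python
def compact_halted(prom_response: dict) -> bool:
    """Return True if any thanos_compact_halted series reports value 1.

    `prom_response` is the parsed JSON body of a Prometheus instant query,
    e.g. GET /api/v1/query?query=thanos_compact_halted
    (hypothetical helper for the verification step above).
    """
    results = prom_response.get("data", {}).get("result", [])
    # Prometheus encodes each sample value as [timestamp, "value-as-string"]
    return any(float(sample["value"][1]) == 1 for sample in results)
```

After restarting the compactor, the alert condition should clear once this returns False.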
{
"title": "Snowflake",
"pageID": "218446612",
"pageLink": "/display/GMDM/Snowflake",
"content": ""
},
{
"title": "Dynamic Views Backwards Compatibility Error SOP",
"pageID": "322555521",
"pageLink": "/display/GMDM/Dynamic+Views+Backwards+Compatibility+Error+SOP",
"content": "For the process documentation please visit the following page:Snowflake: Backwards compatibilityThere are two artifacts that can be created for this process and will be delivered to the HUB-DL:breaking-changes.info - this file is created when an attribute changes its type from a lov to a non-lov value or vice-versa. Lov attributes have the *_LKP suffix in the column names for dynamic views therefore in this scenario there will be an additional column created and the data will be transferred to it. Bot columns will still be present in Snowflake. There is no action needed from the HUB end.breaking-changes.error - this file is only created when an existing column is converted into a nested value (is a parent value for multiple other attributes). Each nested value has a separate dynamic view that contains all of its attributes. The changes in this file are omitted in the snowflake refresh. When that kind of change will be discovered HUB will send information to Change Management and Delottie team to manage that case. "
},
{
"title": "How to Gather Detailed Logs from Snowflake Connector",
"pageID": "234979546",
"pageLink": "/display/GMDM/How+to+Gather+Detailed+Logs+from+Snowflake+Connector",
"content": "How To change the Kafka Consumer parameters in Snowflake Kafka Conenctor:add do docker-compose.yml:        environment:          - "CONNECT_MAX_POLL_RECORDS=50"          - "CONNECT_MAX_POLL_INTERVAL_MS=900000"    recreate container.How To enable JDBC TRACE on Snowflake Kafka Connector:    JDBC TRACE LOGS are in the TMP directory:    https://github.com/snowflakedb/snowflake-kafka-connector/pull/201/commits/650b92cfa362217ca4dfdf2c6768026e862a9b45    add         environment:          - "JDBC_TRACE=true"     additionally you can enable trace on whole connector:      - "CONNECT_LOG4J_LOGGERS=org.apache.kafka.connect=TRACE"      more details here:            https://docs.confluent.io/platform/current/connect/logging.html#connect-logging-docker            https://docs.confluent.io/platform/current/connect/logging.html    mount volume:       volumes:          - "/app/kafka-connect/prod/logs:/tmp:Z"    recreate container.        LOGS are in the:        amraelp00007848:mdmuspr:[05:59 AM]:/app/kafka-connect/prod/logs> pwd        /app/kafka-connect/prod/logs/snowflake_jdbc0.log.0            Also gather the logs from the Container stdout:        docker logs prod_kafka-connect-snowflake >& prod_kafka-connect-snowflake_after_restart_24032022_jdbc_trace.log   Additional details about DEBUG with snowflake debug:https://docs.confluent.io/platform/current/connect/logging.html#check-log-levelsYou can enable the DEBUG logs by editing the "connect" logfile. 
(it is different to the JDBC trace setting we used before)This is the link to our doc explaining the log enabling: ttps://docs.snowflake.com/en/user-guide/kafka-connector-ts.html#reporting-issuesIn more details, on the confluent documentation:https://docs.confluent.io/platform/current/connect/logging.html#using-the-kconnect-apiIt is also possible to use an API call: curl -s -X PUT -H "Content-Type:application/json" \\                        http://localhost:8083/admin/loggers/com.snowflake.kafka.connector \\-d '{"level": "DEBUG"}' | jq '.'Share with Snowflake support.     "
},
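The curl call above can also be reproduced programmatically. A minimal Python sketch that only builds the Kafka Connect admin/loggers PUT request (it is not sent here; the localhost URL and logger name are taken from the curl example):

```python
import json
import urllib.request

def debug_logger_request(host="http://localhost:8083",
                         logger="com.snowflake.kafka.connector",
                         level="DEBUG"):
    """Build the admin/loggers PUT request equivalent to the curl call above.

    The request is only constructed, not executed; send it with
    urllib.request.urlopen(req) when the Connect worker is reachable.
    """
    body = json.dumps({"level": level}).encode()
    return urllib.request.Request(
        url=f"{host}/admin/loggers/{logger}",
        data=body,
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
```

This mirrors the documented endpoint shape: PUT /admin/loggers/&lt;logger-name&gt; with a JSON body carrying the desired level.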
{
"title": "How to Refresh LOV_DATA in Lookup Values Processing",
"pageID": "218446615",
"pageLink": "/display/GMDM/How+to+Refresh+LOV_DATA+in+Lookup+Values+Processing",
"content": "Log in to proper Snowflake instance (credentials are stored in ansible repository):NPROD:EMEA (EU) - https://emeadev01.eu-west-1.privatelink.snowflakecomputing.comAMER (US) - https://amerdev01.us-east-1.privatelink.snowflakecomputing.comPROD: EMEA (GBL) - https://emeaprod01.eu-west-1.privatelink.snowflakecomputing.com AMER (US) - https://amerprod01.us-east-1.privatelink.snowflakecomputing.comSet proper role, warehouse and database:example (EU): DB NameCOMM_GBL_MDM_DMART_PRODDefault warehouse nameCOMM_MDM_DMART_WHDevOps role nameCOMM_PROD_MDM_DMART_DEVOPS_ROLERun commands in the following order:COPY INTO landing.lov_data from @landing.LOV_DATA_STG pattern='.*.json';call customer.refresh_lov();call customer.materialize_view_full_refresh('M', 'CUSTOMER','CODES');call customer.materialize_view_full_refresh('M', 'CUSTOMER','CODE_SOURCE_MAPPINGS');call customer.materialize_view_full_refresh('M', 'CUSTOMER','CODE_TRANSLATIONS');REMOVE @landing.LOV_DATA_STG pattern='.*.json'; "
},
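The ordered statements above can be wrapped in a small driver so the sequence (load, refresh, three full materializations, stage cleanup) is never run out of order. A sketch; the cursor object (e.g. from the Snowflake Python connector) and the function name are assumptions:

```python
# Ordered statements from the runbook above; connection/role/warehouse
# selection is assumed to be done elsewhere.
LOV_REFRESH_STEPS = [
    "COPY INTO landing.lov_data FROM @landing.LOV_DATA_STG pattern='.*.json'",
    "CALL customer.refresh_lov()",
    "CALL customer.materialize_view_full_refresh('M', 'CUSTOMER', 'CODES')",
    "CALL customer.materialize_view_full_refresh('M', 'CUSTOMER', 'CODE_SOURCE_MAPPINGS')",
    "CALL customer.materialize_view_full_refresh('M', 'CUSTOMER', 'CODE_TRANSLATIONS')",
    "REMOVE @landing.LOV_DATA_STG pattern='.*.json'",
]

def run_lov_refresh(cursor):
    """Run the refresh steps in order; the REMOVE cleanup must come last
    so the staged files are only dropped after a successful load."""
    for stmt in LOV_REFRESH_STEPS:
        cursor.execute(stmt)
```

If any step fails, stop there rather than continuing, so the stage files stay available for a retry.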
{
"title": "Issue: Cannot Execute Task, EXECUTE TASK Privilege Must Be Granted to Owner Role",
"pageID": "196884458",
"pageLink": "/display/GMDM/Issue%3A+Cannot+Execute+Task%2C+EXECUTE+TASK+Privilege+Must+Be+Granted+to+Owner+Role",
"content": "Environment details:SF: https://emeadev01.eu-west-1.privatelink.snowflakecomputing.comdb: COMM_EU_MDM_DMART_DEVschema: CUSTOMERrole: COMM_GBL_MDM_DMART_DEV_DEVOPS_ROLEIssue:The command is working fine:\nCREATE OR REPLACE TASK customer.refresh_customer_sl_eu_legacy_views\n WAREHOUSE = COMM_MDM_DMART_WH\n AFTER customer.refresh_customer_consolidated_views\nAS\nCALL customer.refresh_sl_views('COMM_EU_MDM_DMART_DEV_DB','CUSTOMER','COMM_GBL_MDM_DMART_DEV_DB','CUSTOMER_SL','%','I','M', false);\nALTER TASK customer.refresh_customer_sl_eu_legacy_views resume;\nThe command that is causing the issue:\nALTER TASK customer.refresh_customer_consolidated_views resume;\n\nSQL Error [91089] [23001]: Cannot execute task , EXECUTE TASK privilege must be granted to owner role\nSolution:http://btondemand.COMPANY.com/getsupportChoose SnowflakeIssue:Describe your issue - Cannot execute task, EXECUTE TASK privilege must be granted to owner rolePlease provide a detailed description:Hi Team,We are facing the following issue:SQL Error [91089] [23001]: Cannot execute task, EXECUTE TASK privilege must be granted to owner roleduring the execution of the following command:ALTER TASK customer.refresh_customer_consolidated_views resume;Environment details:HOST: https://emeadev01.eu-west-1.privatelink.snowflakecomputing.comDB: COMM_EU_MDM_DMART_DEVSCHEMA: CUSTOMERROLE: COMM_GBL_MDM_DMART_DEV_DEVOPS_ROLECould you please fix this issue in DEV/QA/STAGE and additionally on PROD:HOST: https://emeaprod01.eu-west-1.privatelink.snowflakecomputing.comPlease let me know if you need any other details.Created ticket for reference: - http://digitalondemand.COMPANY.com/My-Tickets/Ticket-Details?ticket=RF3372664 "
},
{
"title": "PTE: Add Country",
"pageID": "302686106",
"pageLink": "/display/GMDM/PTE%3A+Add+Country",
"content": "There are two files in the Snowflake Bitbucket repo that are used in the deployment for PTE:src/sql/global/pte_sl/tables/driven_tables.sqlsrc/sql/global/pte_sl/views/report_views.sqldriven_tables.sqlThis file contains the definitions of supporting tables used for the calculation of the PTE_REPORT view.DRIVEN_TABLE2_STATIC contains the list of identifiers per country and the column placement in the pte_report view. There can be a maximum of five identifiers per country and they should be provided by the PTE team. If there are no identifiers added for a country in the table the list of identifiers will be calculated "dynamically" based on the number of HCPs having the identifier.Column nameDescriptionISO_CODEISO2 code of the country ie. 'TR', 'FR', 'PL' etc.CANONICAL_CODERDM code that will appear in PTE_REPORT as IDENTIFIER_CODELANG_DESCRDM code description that will appear in PTE_REPORT as IDENTIFIER_CODE_DESCCODE_IDTYPE_LKP value used to connect to the identifiers table to extract the value.MODEL'p' or 'i' showing whether the codes for the country should be taken from the IQVIA ('i') or COMPANY ('p') data model.ORDER_IDA number from 1 to 5. Showing the placement of the code among identifiers. Code from 1 will be mapped to IDENTIFIER1_CODE etc.report_views.sqlDRIVEN_TABLE1 is a view that derives the basic information for the country from the COUNTRY_CONFIG table. 
The country ISO2 code has to be added into the WHERE clause depending on whether the country should have data from the IQVIA data model (the first part of the query) or from the COMPANY data model (after the UNION).\n \n DRIVEN_TABLE1\n \n \n CREATE OR REPLACE VIEW PTE_SL."DRIVEN_TABLE1" AS(\nSELECT\n ISO_CODE,\n NAME,\n LABEL,\n RELTIO_TENANT,\n HUB_TENANT,\n SF_INSTANCE,\n SF_TENANTDATABASE,\n CUSTOMERSL_PREFIX\nFROM CUSTOMER.COUNTRY_CONFIG \nWHERE ISO_CODE in ('SK', 'PH', 'CL', 'CO', 'AR', 'MX')\nAND CUSTOMERSL_PREFIX = 'i_'\nUNION ALL\nSELECT\n ISO_CODE,\n NAME,\n LABEL,\n RELTIO_TENANT,\n HUB_TENANT,\n SF_INSTANCE,\n SF_TENANTDATABASE,\n CUSTOMERSL_PREFIX\nFROM CUSTOMER.COUNTRY_CONFIG\nWHERE ISO_CODE in ('AD', 'BL', 'BR', 'FR', 'GF', 'GP', 'MC', 'MC', 'MF', 'MQ', 'MU', 'NC', 'PF', 'PM', 'RE', 'TF', 'WF', 'YT')\nAND CUSTOMERSL_PREFIX = 'p_'\n);\n \nPTE_REPORT is the view from which the clients take their data. Unfortunately the data required varies from country to country and also in some cases between nprod and prod due to data availability.GO_STATUS. By default for the IQVIA data model the values for GO_STATUS are YES/NO and for the COMPANY data model they're Y/N. If there's an exception, you have to manually add the country to the CASE expression in the view.\n \n GO_STATUS\n \n \n CAST(CASE\n WHEN HCP.GO_STATUS_LKP = 'LKUP_GOVOFF_GOSTATUS:GO' AND HCP.COUNTRY IN ('CO', 'CL', 'AR', 'MX') THEN 'Y'\n WHEN HCP.GO_STATUS_LKP = 'LKUP_GOVOFF_GOSTATUS:NGO' AND HCP.COUNTRY IN ('CO', 'CL', 'AR', 'MX') THEN 'N'\n WHEN HCP.GO_STATUS_LKP = 'LKUP_GOVOFF_GOSTATUS:GO' THEN 'YES'\n WHEN HCP.GO_STATUS_LKP = 'LKUP_GOVOFF_GOSTATUS:NGO' THEN 'NO'\n\tWHEN HCP.COUNTRY IN ('CO', 'CL', 'AR', 'MX') THEN 'N'\n ELSE 'NO'\nEND AS VARCHAR(200)) AS "GO_STATUS",\n \n"
},
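The ORDER_ID-to-column mapping described for DRIVEN_TABLE2_STATIC can be illustrated with a small sketch (the helper name and the dict row shape are hypothetical; the real mapping happens in SQL inside the PTE_REPORT view):

```python
def identifier_columns(static_rows):
    """Map DRIVEN_TABLE2_STATIC rows for one country to the
    IDENTIFIER<n>_CODE columns of PTE_REPORT (sketch; max five slots)."""
    columns = {}
    for row in static_rows:
        order_id = row["ORDER_ID"]
        if not 1 <= order_id <= 5:
            # The table allows at most five identifiers per country
            raise ValueError(f"ORDER_ID must be 1..5, got {order_id}")
        columns[f"IDENTIFIER{order_id}_CODE"] = row["CANONICAL_CODE"]
    return columns
```

A country with no rows here falls back to the "dynamic" identifier selection based on HCP counts, as described above.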
{
"title": "QC",
"pageID": "234712311",
"pageLink": "/display/GMDM/QC",
"content": "Snowflake QC Check data is located in the CUSTOMER.QUALITY_CONTROL table.Duplicated COMPANY_GLOBAL_CUSTOMER_IDsql:SELECT COMPANY_global_customer_id, COUNT(1)FROM customer.entitiesWHERE COMPANY_global_customer_id is not nullAND last_event_type not like '%LOST_MERGE%'AND last_event_type not like '%REMOVED%'GROUP BY COMPANY_global_customer_idHAVING COUNT(1) > 1Description:COMPANY Global Customer ID should be unique for every entity in Reltio. In case of any duplicates you have to check if it's a Snowflake data refresh issue (data is OK in Reltio not in Snowflake), or something is wrong with the flow (check if the id's are duplicated in COMPANYIdRegistry in Mongo). Merges with object datasql:SELECT ENTITY_URI FROM CUSTOMER.ENTITIESWHERE LAST_EVENT_TYPE IN ('HCP_LOST_MERGE', 'HCO_LOST_MERGE', 'MCO_LOST_MERGE')AND OBJECT IS NOT NULLDescription:All entities in the *Lost_Merge status should have null values in the object column. If that's not the case they have to be cleared manually either by re-sending the specified record to Snowflake or by manually setting the object field for them as null. Active crosswalks assigned to more than one different entitysql:SELECT CROSSWALK_URIFROM CUSTOMER.M_ENTITY_CROSSWALKSWHERE ACTIVE = TRUEAND ACTIVE_CROSSWALK = TRUEGROUP BY CROSSWALK_URIHAVING COUNT(ENTITY_URI) > 1Description:A crosswalk should be active for only one entity_uri. If that's not the case then either the entities should be merged (contact: DLER-COMPANY-MDM-Support <COMPANY-MDM-Support@iqvia.com>) or they were merged but the lost_merge event wasn't delivered to snowflake / mdm_hub.Duplicated entities in materialized viewssql:SELECT ENTITY_URI, 'HCO' TYPE, COUNT(1)FROM CUSTOMER.M_HCOGROUP BY ENTITY_URIHAVING COUNT(1) > 1UNION ALLSELECT ENTITY_URI, 'HCP' TYPE, COUNT(1)FROM CUSTOMER.M_HCPGROUP BY ENTITY_URIHAVING COUNT(1) > 1Description:There are duplicated records in materialized tables. 
Investigate what caused the duplicates and run the full materialization procedure to fix it.Entities with the same global id and parent global idsql:SELECT ENTITY_URI, COMPANY_GLOBAL_CUSTOMER_ID, PARENT_COMPANY_GLOBAL_CUSTOMER_IDFROM CUSTOMER.ENTITIESWHERE COMPANY_GLOBAL_CUSTOMER_ID = PARENT_COMPANY_GLOBAL_CUSTOMER_IDAND COMPANY_GLOBAL_CUSTOMER_ID IS NOT NULLDescription:Check if this is the case in the hub. If not, re-send the data into Snowflake; if yes, then contact the support team.Missing ID's for specializations:sql:SELECT ENTITY_URIFROM CUSTOMER.M_SPECIALITIESWHERE SPECIALITIES_URI IS NULLDescription:Review the affected entities. If they're missing an id, review them with the Hub. Make sure they're active in Reltio and Hub. You might have to reload them in Snowflake if they're not updated."
},
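The first QC query (duplicated COMPANY_GLOBAL_CUSTOMER_ID) can be mirrored in Python for ad-hoc checks on exported rows. A sketch, under the assumption that rows are dicts keyed by column name:

```python
from collections import Counter

def duplicated_global_ids(entities):
    """Python analogue of the duplicated-ID QC query: find
    COMPANY_GLOBAL_CUSTOMER_IDs shared by more than one entity,
    excluding LOST_MERGE/REMOVED events, like the SQL above."""
    counts = Counter(
        e["COMPANY_GLOBAL_CUSTOMER_ID"]
        for e in entities
        if e.get("COMPANY_GLOBAL_CUSTOMER_ID") is not None
        and "LOST_MERGE" not in e.get("LAST_EVENT_TYPE", "")
        and "REMOVED" not in e.get("LAST_EVENT_TYPE", "")
    )
    return sorted(gid for gid, n in counts.items() if n > 1)
```

Any ID returned here should then be checked against Reltio and the COMPANYIdRegistry collection in Mongo, as described above.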
{
"title": "Snowflake - Prometheus Alerts",
"pageID": "401026870",
"pageLink": "/display/GMDM/Snowflake+-+Prometheus+Alerts",
"content": "SNOWFLAKE TASK FAILEDDescription: This alert means that one of the regularly scheduled snowflake tasks have failed. To fix this you have to find the task that was failed in Snowflake, check the reason, and fix it. Snowflake task dag's have an auto suspend function after ten conscutive failed runs, if the issue isn't resolved at the time you'll need to manually restart the root task.Queries:Idnetify failed tasks\nSELECT *\nFROM TABLE(INFORMATION_SCHEMA.TASK_HISTORY(RESULT_LIMIT=>5000, ERROR_ONLY=>TRUE))\n;\nUse the ERROR_CODE and ERROR_MESSAGE columns to find out the information needed to determine the cause of the error.After determining and fixing the cause of the issue you can manually run all the queries that are left in the task tree. To get them you can use the following code:\nSELECT *\nFROM TABLE(INFORMATION_SCHEMA.TASK_DEPENDENTS('<task_name>'))\n;\nRemember that if a schema isn't selected for the session you need submit it with the task name.You can also use the execute task query with the RETRY LAST option to restart the flow. This will only work if a new run wasn't started yet and you have to run it on the root task not the task that failed.\nEXECUTE TASK <root_task_name> RETRY LAST;\nSNOWFLAKE TASK FAILED 603Description: This alert means that one of the regularly scheduled snowflake tasks have failed. To fix this you have to find the task that was failed in Snowflake, check the reason, and fix it. Snowflake task dag's have an auto suspend function after ten conscutive failed runs, if the issue isn't resolved at the time you'll need to manually restart the root task.Queries:Idnetify failed tasks\nSELECT *\nFROM TABLE(INFORMATION_SCHEMA.TASK_HISTORY(RESULT_LIMIT=>5000, ERROR_ONLY=>TRUE))\n;\nYou can manually run all the queries that are left in the task tree. 
To get them you can use the following code:\nSELECT *\nFROM TABLE(INFORMATION_SCHEMA.TASK_DEPENDENTS('<task_name>'))\n;\nRemember that if a schema isn't selected for the session you need to submit it with the task name.You can also use the execute task query with the RETRY LAST option to restart the flow. This will only work if a new run wasn't started yet, and you have to run it on the root task, not the task that failed.\nEXECUTE TASK <root_task_name> RETRY LAST;\nSNOWFLAKE TASK NOT STARTED 24hDescription: A Snowflake scheduled task hasn't run in the last day. You need to check if the alert is factually correct and solve any issues that are stopping the task from running. Please note that on production the materialization is scheduled every two hours, so if a materialization task isn't run for 24h that means that we missed twelve materialization cycles of data, hence it's important to get it fixed as soon as possible.Queries:Check when the task was last run\nSELECT *\nFROM TABLE(INFORMATION_SCHEMA.TASK_HISTORY(RESULT_LIMIT=>5000))\nWHERE 1=1\nAND DATABASE_NAME ='<database_name>'\nAND NAME = '<task_name>'\nORDER BY QUERY_START_TIME DESC\n;\nIf the task is running successfully the issue might be with prometheus data scraping. Check the following dashboard to see when the data was last successfully scraped:Snowflake Tasks - DashboardIf the task wasn't run in the last 24h, it might be suspended. Verify it using the command:\nSHOW TASKS;\nThe column STATE will tell you if the task is suspended or started, and the LAST_SUSPENDED_REASON column will tell you the reason for the last suspension. If it's SUSPENDED_DUE_TO_ERRORS you need to get the list of all of the dependent tasks and find which one of them failed (reminder: the root task gets suspended if any of the child tasks fails ten times in a row). 
To find out the failed task and the dependents of the suspended task you can use the queries from the alert SNOWFLAKE TASK FAILED.To restart a suspended task run the query:\nALTER TASK <schema_name>.<task_name> resume;\nSNOWFLAKE DUPLICATED COMPANY GLOBAL CUSTOMER ID'SDescription: COMPANY Global Customer Id's are unique identifiers calculated by the Hub. In some cases of incorrectly performed unmerge events on the Reltio side there might be entities with wrongly assigned hub-callback crosswalks, or there might be another reason that caused the duplicates. The ID's need to be unique, so this should be verified, fixed, and the data reloaded in a timely manner.Queries:Identify COMPANY global customer id's with duplicates:\nSELECT COMPANY_global_customer_id, COUNT(1)\nFROM customer.entities\nWHERE COMPANY_global_customer_id is not null\nAND last_event_type not like '%LOST_MERGE%'\nAND last_event_type not like '%REMOVED%'\nGROUP BY COMPANY_global_customer_id\nHAVING COUNT(1) > 1\n;\nVariant of the query that returns entity uri's for easier querying:\nSELECT ENTITY_URI\nFROM CUSTOMER.ENTITIES\nWHERE COMPANY_GLOBAL_CUSTOMER_ID IN (\n    SELECT COMPANY_global_customer_id\n    FROM customer.entities\n    WHERE COMPANY_global_customer_id is not null\n    AND last_event_type not like '%LOST_MERGE%'\n    AND last_event_type not like '%REMOVED%'\n    GROUP BY COMPANY_global_customer_id\n    HAVING COUNT(1) > 1\n)\n;\nCheck if the duplicates are reflected in MongoDB. If the data in Mongo doesn't have the duplicates, use Hub UI to resend the events to Snowflake.Check if Reltio contains the duplicated data; if not, reconcile the affected entities, if yes, review the reason. 
If it's because of a Hub_Callback you might need to manually delete the crosswalk, and check COMPANYIDRegistry in Mongo; if it also contains duplicates, you need to delete them there as well.SNOWFLAKE LAST ENTITY EVENT TIMEDescription: The alert informs of Snowflake production tenants where the last update was more than four hours ago. The refresh on production is every two hours and the traffic is high enough that there should be updates in every cycle.Queries:Check how many minutes ago was the last update in Snowflake\nSELECT DATEDIFF('MINUTE', (SELECT MAX(SF_UPDATE_TIME) FROM CUSTOMER.ENTITIES), (SELECT CURRENT_TIMESTAMP()));\nIf it's over four hours, check whether the Kafka snowflake topic has an active consumer and whether the data is flowing correctly to the landing schema. Review any latest changes in the Snowflake refresh to make sure that there's nothing impacting the tasks and they're all started. If the data in snowflake is OK then the issue might be with the data scrape.Snowflake Tasks - DashboardSNOWFLAKE MISSING COMPANY GLOBAL ID'S IN MATERIALIZED DATADescription: This alert informs us that there are entities in Snowflake that don't have a COMPANY Global Customer ID. This is a mandatory identifier and as such should be available for all event types (excluding DCR's). 
It's also used by downstream clients to identify records, and in case the value is deleted from an entity it will be deleted in the downstream systems.Queries:Check the impact in the qc table:\nSELECT *\nFROM CUSTOMER.QC_COMPANY_ID\nORDER BY DATE DESC\n;\nGet the list of all entities that are missing the id's\nSELECT *\nFROM CUSTOMER.ENTITIES\nWHERE COMPANY_GLOBAL_CUSTOMER_ID IS NULL\nAND ENTITY_TYPE != 'DCR'\nAND COUNTRY != 'US'\nAND (SELECT CURRENT_DATABASE()) not ilike 'COMM_EU%'\n;\nCheck the data in Mongo, AKHQ, Reltio.Consider informing downstream clients to stop ingestion of the data until the issue is fixed.SNOWFLAKE GENERATED EVENTS WITHOUT COMPANY GLOBAL CUSTOMER ID'SDescription: This alert stops events without COMPANY Global Customer ID's from reaching the materialized data layer. It will add information about these occurrences into a special table and delete those events before materialization.Queries:Check the list of impacted entity_uri's\nSELECT *\nFROM CUSTOMER.MISSING_COMPANY_ID\n;\nCheck for the reason of missing COMPANY Global Customer Id's similarly to the missing global id's in materialized data alert.After finding and fixing the reason of the issue use Hub UI to resend the profiles into Snowflake to make sure we have the correct data.Clear the missing COMPANY id table\nTRUNCATE TABLE CUSTOMER.MISSING_COMPANY_ID;\nSNOWFLAKE TOPIC NO CONSUMERDescription: The Kafka Connector from Mongo to Snowflake has data which isn't consumed.Queries:Check if the consumer is online; you might have to restart its pod to get it working again.SNOWFLAKE VIEW MATERIALIZATION FAILEDDescription: This alert informs you that one or more views have failed in their last materialization attempt. 
The alert checks the data from the CUSTOMER.MATERIALIZED_VIEW_LOG table for the last seven days and chooses the last materialization attempt based on the largest id.Queries:Query that the alert is based upon\nSELECT COUNT(VIEW_NAME) FAILED_MATERIALIZATION\nFROM (\n SELECT VIEW_NAME, MAX(ID) ID, SUCCESS, ERROR_MESSAGE, MATERIALIZED_OPTION, ROW_NUMBER() OVER (PARTITION BY VIEW_NAME ORDER BY ID DESC) AS RN\n FROM CUSTOMER.MATERIALIZED_VIEW_LOG\n GROUP BY VIEW_NAME, ERROR_MESSAGE, ID, SUCCESS, MATERIALIZED_OPTION\n HAVING DATEDIFF('days', MAX(START_TIME), (SELECT CURRENT_DATE())) < 7\n)\nWHERE RN = 1\nAND SUCCESS = 'FALSE';\nModified version that will show you the error message with which Snowflake ended the materialization attempt. Those are standard SQL errors for which you have to find out the root cause and the resolution of the issue.\nSELECT VIEW_NAME, ERROR_MESSAGE\nFROM (\n    SELECT VIEW_NAME, MAX(ID) ID, SUCCESS, ERROR_MESSAGE, MATERIALIZED_OPTION, ROW_NUMBER() OVER (PARTITION BY VIEW_NAME ORDER BY ID DESC) AS RN\n    FROM CUSTOMER.MATERIALIZED_VIEW_LOG\n    GROUP BY VIEW_NAME, ERROR_MESSAGE, ID, SUCCESS, MATERIALIZED_OPTION\n    HAVING DATEDIFF('days', MAX(START_TIME),  (SELECT CURRENT_DATE())) < 7\n)\nWHERE RN = 1\nAND SUCCESS = 'FALSE';\nSNOWFLAKE MISSING DESC IN CODES VIEWDescription: This alert indicates that there are codes without descriptions in the CUSTOMER.M_CODES data table.Queries:Check the missing data:\nSELECT CODE_ID, DESC\nFROM CUSTOMER.M_CODES\nWHERE DESC IS NULL;\nCheck the Dynamic view to make sure it's not a materialization issue:\nSELECT CODE_ID, DESC\nFROM CUSTOMER.CODES\nWHERE DESC IS NULL;\nIf it's a materialization issue then rematerialize the table.\nCALL CUSTOMER.MATERIALIZE_VIEW_FULL_REFRESH('M', 'CUSTOMER', 'CODES');\nIf the data is missing in the dynamic view, check the code in RDM. If it has a source mapping from the source Reltio with the canonical value set to true, then it should have data in Snowflake. Check why it isn't flowing. 
If there is no such entry, notify the COMPANY team."
},
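The "last attempt per view" logic that the alert query implements with ROW_NUMBER() OVER (PARTITION BY VIEW_NAME ORDER BY ID DESC) can be emulated in Python for quick reasoning about the alert. A sketch with hypothetical row dicts mirroring the MATERIALIZED_VIEW_LOG columns:

```python
def failed_last_materializations(log_rows):
    """Emulate the alert query: for each VIEW_NAME keep only the attempt
    with the highest ID, then report views whose last attempt failed
    (SUCCESS = 'FALSE'), like RN = 1 in the SQL above."""
    latest = {}
    for row in log_rows:
        name = row["VIEW_NAME"]
        if name not in latest or row["ID"] > latest[name]["ID"]:
            latest[name] = row
    return sorted(r["VIEW_NAME"] for r in latest.values() if r["SUCCESS"] == "FALSE")
```

Note that a view with an old failure followed by a newer success does not alert, which matches the intent of picking only the latest attempt.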
{
"title": "Release",
"pageID": "386809112",
"pageLink": "/display/GMDM/Release",
"content": "Release history:4.1.24 [TEMPLATE - draft]\n4.1.24 [TEMPLATE - example]\n4.1.28\n4.1.29\n4.10.0\n4.11.0\n4.11.1\n4.12.0\n4.12.1\n4.12.2\n4.14.0\n4.14.1\n4.15.0\n4.16.0\n4.16.1\n4.17.0\n4.18.0\n4.18.1\n4.19.0\n4.21.0\n4.22.0\n4.23.0\n4.25.0\n4.28.0\n4.3.0\n4.30.0\n4.31.0\n4.32.0\n4.33.0\n4.34.0\n4.35.0\n4.38.0\n4.39.0\n4.40.0\n4.41.0\n4.42.0\n4.43.0\n4.44.0\n4.45.0\n4.46.0\n4.47.0\n4.47.1\n4.48.0\n4.49.0\n4.50.0\n4.51.0\n4.54.0\n4.54.1\n4.55.0\n4.56.0\n4.58.0\n4.59.0\n4.6.0\n4.60.0\n4.62.0\n4.63.0\n4.9.0\nSnowflake Release\nRelease process description (TBD):Text:Diagram: How branches work, differences between release and FIX deployemend(TBD):Text:Diagram:Release rules:Always do PR review.Do not deploy unencrypted files.Release versioning: normal path 4.x, FIX version 4.10.xTBDTBDRelease calendar:TBD"
},
{
"title": "Snowflake Release",
"pageID": "430080179",
"pageLink": "/display/GMDM/Snowflake+Release",
"content": ""
},
{
"title": "Current Release",
"pageID": "438309059",
"pageLink": "/display/GMDM/Current+Release",
"content": "Release report:Release:2.2.0Release date:STATUSES: SUCCESS / FAILED / REPEATEDReleased by:Grzegorz SzczęsnyPlanned GO-LIVE:wed Jul 03Jira linkCategoryDescriptionDeveloped ByDevelopment FinishedTested By Test Scenarios / ResultsTesting FinishedAdditional Notes\n MR-9001\n -\n Getting issue details...\n STATUS\n \n MR-8942\n -\n Getting issue details...\n STATUS\n Feature ChangeUpdate the data mart with code changes needed for Onekey and DLUP data.SZCZEG0102.07.2024SARMID03Done validating below:✅Onekey Data Mapping.✅ DLUP Data Mapping.03.07.2024\n MR-9056\n -\n Getting issue details...\n STATUS\n Feature ChangeUpdate the Country Table for Transparency_SL with new data.SZCZEG0102.07.2024SARMID03✅New data passed the checking.03.07.2024\n MR-8988\n -\n Getting issue details...\n STATUS\n ChangeImproved the MATERIALIZE_VIEW_INCREMENTAL_REFRESH procedure to cover 5 options, that were previously covered by 5 separate procedures and replaced their use with the new oneHARAKR02.07.2024PROD deployment report:PROD deployment date:Wed Jun 26 12:27:48 UTC 2024Deployed by:Grzegorz SzczęsnyENV:LinkStatusDetailsAMERSUCCESSAPACSUCCESSEMEASUCCESSGBL(EX-US)SUCCESSGBLUSSUCCESSGLOBALSUCCESS"
},
{
"title": "2.1.0",
"pageID": "430080184",
"pageLink": "/display/GMDM/2.1.0",
"content": "Release report:Release:2.1.0Release date:Wed Jun 26 12:27:48 UTC 2024STATUSES: SUCCESS / FAILED / REPEATEDReleased by:Grzegorz SzczęsnyPlanned GO-LIVE:wed Jun 19Jira linkCategoryDescriptionDeveloped ByDevelopment FinishedTested By Test Scenarios / ResultsTesting FinishedAdditional Notes\n MR-8919\n -\n Getting issue details...\n STATUS\n New FeaturePOC - The point of this ticket is to check if calculating a delta based on the SF_UPDATE_TIME from the materialized ENTITY_UPDATE_DATES table will be more efficient than using the stream. If this results in better performance than we're going to calculate deltas on our base tables dropping the streams.SZCZEG0128.05.2024SZCZEG01Verified the change on times and the data quality by running the procedures simultanously on EMEA STAGE for a period of timeold:new:\n MR-8862\n -\n Getting issue details...\n STATUS\n New FeatureDue to a change done in RDM we lost some descriptions for certain codes. It's important that we have the visibility for such issues in the future, therefore the need for this alert.SZCZEG0129.05.2024-New alert in Prometheus no need for additional testing--\n MR-8969\n -\n Getting issue details...\n STATUS\n ChangeAdjusted TRANSPARENCY_SL views to filter based on COUNTRY code (COMPANY model vs iquvia)HARAKR13.06.2024\n MR-9003\n -\n Getting issue details...\n STATUS\n ChangeUdate TRANSPARENCY_SL schema to Secure Views instead of views, due to the need to have the data from EMEA PROD available in AMER lower envs.SZCZEG0121.06.2024-Checked the view type on PROD--\n MR-8986\n -\n Getting issue details...\n STATUS\n ChangeChenge the way incremental code updates treat hard deleted lov's.SZCZEG0118.06.2024SZCZEG01\n MR-8740\n -\n Getting issue details...\n STATUS\n ChangeSuspend the WAREHOUSE_SUSPEND task.SZCZEG0118.04.2024-Pushed diretly to PROD--\n MR-8701\n -\n Getting issue details...\n STATUS\n New FeatureAdd new views in the PT&E schema for Saudi Arabia HCO / 
IDENTIFIERSSZCZEG0118.04.2024-Checked the views availability and record counts.--\n MR-8712\n -\n Getting issue details...\n STATUS\n BugfixFix a case where column order changes and it causes global views to not update properly.SZCZEG0118.04.2024SZCZEG01Reran the case that caused the issue--\n MR-8827\n -\n Getting issue details...\n STATUS\n ChangeAdd email column to PT&E EU/APAC reportsSZCZEG0122.05.2024SZCZEG01Checked the column availability--\n MR-8863\n -\n Getting issue details...\n STATUS\n ChangeAdd a case for code materialization where there is more than one description from the source Reltio but not all of them are CanonicalValues.SZCZEG0122.05.2024SZCZEG01Checked with the existing missing descriptions.--\n MR-7038\n -\n Getting issue details...\n STATUS\n New FeatureAdd enhanced logging for manually called procedures.SZCZEG0122.05.2024SZCZEG01---\n MR-8896\n -\n Getting issue details...\n STATUS\n ChangeRemove DE from PTE_REPORT_EU, change values "Without Title", "Unknown", and "Unspecified" to null.SZCZEG0122.05.2024SZCZEG01---\n MR-8916\n -\n Getting issue details...\n STATUS\n ChangeRemove "Unknown" Country Codes from missing COMPANY global customer IDs.SZCZEG0128.05.2024SZCZEG01---\n MR-8994\n -\n Getting issue details...\n STATUS\n ChangeUpdate column names for PTE_REPORT_SA.SZCZEG0118.06.2024SZCZEG01---\n MR-8992\n -\n Getting issue details...\n STATUS\n ChangeAdd missing columns to the Transparency_SL reports (MVP1 review).SZCZEG0118.06.2024SZCZEG01---\n MR-8980\n -\n Getting issue details...\n STATUS\n ChangeAdd US data into the Global DataMart TRANSPARENCY_SL.SZCZEG0118.06.2024SZCZEG01---\n MR-8977\n -\n Getting issue details...\n STATUS\n ChangeAdd hard coded columns to the TRANSPARENCY_SL data mart.SZCZEG0118.06.2024SZCZEG01---\n MR-8844\n -\n Getting issue details...\n STATUS\n New FeatureCreate Initial Data Mart for the TRANSPARENCY_SL project.SZCZEG0118.06.2024SZCZEG01---\n MR-9016\n -\n Getting issue details...\n STATUS\n BugfixFix on 
MR-8986. The procedure was launched in the landing schema but it tried to use a function that is only available in customer. Not finding the function in the current schema it returned an errorSZCZEG0125.06.2024SZCZEG01---\n MR-8991\n -\n Getting issue details...\n STATUS\n New FeatureChange refresh entities to use a calculated delta instead of streams. Follow-up to POC MR-8919.SZCZEG0118.06.2024SZCZEG01---PROD deployment report:PROD deployment date:Wed Jun 26 12:27:48 UTC 2024Deployed by:Grzegorz SzczęsnyENV:LinkStatusDetailsAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm_snowflake_deploy/view/AMER/job/deploy_mdmhub_snowflake__amer_prod/165/SUCCESSAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm_snowflake_deploy/view/APAC/job/deploy_mdmhub_snowflake__apac_prod/135/SUCCESSEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm_snowflake_deploy/view/EMEA/job/deploy_mdmhub_snowflake__emea_prod/218/SUCCESSGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm_snowflake_deploy/view/GBL/job/deploy_mdmhub_snowflake__gbl_prod/238/SUCCESSGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm_snowflake_deploy/view/GBLUS/job/deploy_mdmhub_snowflake__gblus_prod/229/SUCCESSGLOBALhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm_snowflake_deploy/view/GLOBAL/job/deploy_mdmhub_snowflake__global_prod/57/SUCCESSCHANGELOG_2_1_0.md"
},
{
"title": "4.1.24 [TEMPLATE - draft]",
"pageID": "386815558",
"pageLink": "/pages/viewpage.action?pageId=386815558",
"content": "Release report:Release:4.1.24Release date:Tue Jan 16 21:08:10 UTC 2024STATUSES: SUCCESS / FAILED / REPEATEDReleased by:TODOPlanned GO-LIVE:Tue Jan 30 (in 2 weeks)StageLinkStatusComments (images 600px)Build:TODOSUCCESS CHANGELOG:TODOUnit tests:TODOSUCCESSTODOIntegration tests:Execution date: TODOExecuted by: TODOAMERTODO[84] SUCCESS[0] FAILED[0] REPEATEDTODOAPACTODO[89] SUCCESS[0] FAILED[0] REPEATEDTODOEMEATODO[89] SUCCESS[0] FAILED[0] REPEATEDTODOGBL(EX-US)TODO[72] SUCCESS[0] FAILED[0] REPEATEDTODOGBLUSTODO[74] SUCCESS[0] FAILED[0] REPEATEDTODOTests ready and approved:approved by: TODORelease ready and approved:approved by: TODODEV and QA tests results:DEV and QA deployment date:TODO Wed Jan 17 09:35:31 UTC 2024Deployment approved:approved by: TODODeployed by:TODOENV:LinkStatusDetailsAMERTODOSUCCESSAPACTODOSUCCESSEMEATODOSUCCESSGBL(EX-US)TODOSUCCESSGBLUSTODOSUCCESS STAGE deployment details:STAGE deployment date:TODO Wed Jan 17 09:35:31 UTC 2024Deployment approved:approved by: TODODeployed by:TODOENV:LinkStatusDetailsAMERTODOSUCCESSAPACTODOSUCCESSEMEATODOSUCCESSGBL(EX-US)TODOSUCCESSGBLUSTODOSUCCESS STAGE test phase details:Verification dateVerification byDashboardHintsStatusDetailsMDMHUB / MDMHUB Component errorsIncreased number of alerts → there's certainly something wrongMDMHUB / MDMHUB KPIsSpikes, especially wide ones, suggest potential issueMDMHUB / MDMHUB Components resourceComponent specific analysis, especially good for the ones updated within latest release (check news fragments)General / Snowflake QC TrendsQuick and easy way to determine if there's something wrong with QC. 
Any change (lower/higher) → potential issueKubernetes / K8s Cluster Usage StatisticsGood for PROD environments since NPROD is too prone to project-specific loadsKubernetes / Pod MonitoringComponent specific analysis, especially good for the ones updated within latest release (check news fragments)General / kubernetes-persistent-volumes Storage trend over time General / Alerts Statistics Increase after release → potential issue General / SSL Certificates and Endpoint AvailabilityLower widget, multiple stacked endpoints at the same time for a long periodPROD deployment report:PROD deployment date:TODO Wed Jan 17 09:35:31 UTC 2024Deployment approved:approved by: TODODeployed by:TODOENV:LinkStatusDetailsAMERTODOSUCCESSAPACTODOSUCCESSEMEATODOSUCCESSGBL(EX-US)TODOSUCCESSGBLUSTODOSUCCESSPROD deploy hypercare details:Verification dateVerification byDashboardHintsStatusDetailsMDMHUB / MDMHUB Component errorsIncreased number of alerts → there's certainly something wrongMDMHUB / MDMHUB KPIsSpikes, especially wide ones, suggest potential issueMDMHUB / MDMHUB Components resourceComponent specific analysis, especially good for the ones updated within latest release (check news fragments)General / Snowflake QC TrendsQuick and easy way to determine if there's something wrong with QC. Any change (lower/higher) → potential issueKubernetes / K8s Cluster Usage StatisticsGood for PROD environments since NPROD is too prone to project-specific loadsKubernetes / Pod MonitoringComponent specific analysis, especially good for the ones updated within latest release (check news fragments)General / kubernetes-persistent-volumes Storage trend over time General / Alerts Statistics Increase after release → potential issue General / SSL Certificates and Endpoint AvailabilityLower widget, multiple stacked endpoints at the same time for a long period"
},
{
"title": "4.1.24 [TEMPLATE - example]",
"pageID": "386809114",
"pageLink": "/pages/viewpage.action?pageId=386809114",
"content": "Release report:Release:4.1.24Tue Jan 16 21:08:10 UTC 2024STATUSES: SUCCESS / FAILED / REPEATEDReleased by:Mikołaj MorawskiTue Jan 30 (in 2 weeks)StageLinkStatusComments (images 600px)Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/467/ SUCCESS CHANGELOG:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/387d6b51ebf7ade55692d80388d81e3c1e59117d Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/467/testReport/ SUCCESSIntegration tests:Execution date: Wed Jan 24 18:01:08 UTC 2024Executed by: Mikołaj MorawskiAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_amer/372/testReport/[84] SUCCESS[0] FAILED[0] REPEATEDAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_apac/314/testReport/[89] SUCCESS[0] FAILED[0] REPEATEDEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_emea/466/testReport/[88] SUCCESS[0] FAILED[0] REPEATEDGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gbl/384/testReport/[73] SUCCESS[0] FAILED[1] REPEATEDfailed tests - DerivedHcpAddressesTestCase.derivedHCPAddressesTest during run on Reltio there were multiple events and test got blocedTest was repeated manually and passed with success <screenshot from local execution>GBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gblus/321/testReport/[74] SUCCESS[0] FAILED[0] REPEATEDTests ready and approved:approved by: Mikołaj Morawski  TODO - add https://marketplace.atlassian.com/apps/1217404/digital-signature?hosting=server&tab=overviewRelease ready and approved:approved by: Mikołaj Morawski STAGE deployment details:STAGE deployment date:Wed Jan 17 09:35:31 UTC 2024Deployment approved:approved by: Mikołaj Morawski 
Deployed by:Mikołaj MorawskiENV:LinkStatusDetailsAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20NPROD/job/deploy_mdmhub_amer_nprod_amer-stage/331/SUCCESScommentsAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20NPROD/job/deploy_mdmhub_apac_nprod_apac-stage/145/SUCCESScommentsEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20NPROD/job/deploy_mdmhub_emea_nprod_emea-stage/365/SUCCESScommentsGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20NPROD/job/deploy_mdmhub_emea_nprod_gbl-stage/211/SUCCESScommentsGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20NPROD/job/deploy_mdmhub_amer_nprod_gblus-stage/234/SUCCESS commentsSTAGE test phase details:Test Test description ResponsibleStatusAlerts verificationTo check if any of the alerts in STG environments is a PROD deployment release stopper. e.g. Latuch, Lukasz e.g. SUCCESSSnowFlake checkTo check if there are any failed QC checks or tasks that can happen on prod environments. Data Quality GatewayTo check if there are any broken events. 
Environment checkTo check if there are any issues on STG environment that can be a PROD release stopperTBDTBDPROD deployment report:PROD deployment date:Wed Jan 17 09:35:31 UTC 2024Deployment approved:approved by: Mikołaj Morawski Deployed by:Mikołaj MorawskiENV:LinkStatusDetailsAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20PROD/job/deploy_mdmhub_amer_prod_amer-prod/255/ SUCCESScommentsAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20PROD/job/deploy_mdmhub_apac_prod_apac-prod/226/SUCCESScommentsEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20PROD/job/deploy_mdmhub_emea_prod_emea-prod/270/SUCCESScommentsGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20PROD/job/deploy_mdmhub_emea_prod_gbl-prod/195/SUCCESScommentsGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20PROD/job/deploy_mdmhub_amer_prod_gblus-prod/229/SUCCESScomments"
},
{
"title": "4.1.28",
"pageID": "386815544",
"pageLink": "/display/GMDM/4.1.28",
"content": "Release report:Release:4.1.28Release date:Thu Feb 08 10:10:38 UTC 2024STATUSES: SUCCESS / FAILED / REPEATEDReleased by:Rafał KućPlanned GO-LIVE:Thu Feb 29StageLinkStatusComments (images 600px)Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/470/SUCCESS CHANGELOG:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/966ebe3374d1de8d89764bbf5fd4e39e638a5723#CHANGELOG.mdhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/39953783022e8b06c49af2e872b7cf66f2a8b26bUnit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/470/testReport/SUCCESSIntegration tests:Execution date: Tue Feb 13 18:00:57 UTC 2024Executed by: Mikołaj MorawskiAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_amer/391/testReport/[84] SUCCESS[0] FAILED[1] REPEATEDone failed test - com.COMPANY.mdm.tests.events.COMPANYGlobalCustomerIdTest.testrepeated from local PC one more time by Mikołaj Morawskiduring run on Reltio there were multiple events and test got blockedTest was repeated manually and passed with successAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_apac/330/testReport/[89] SUCCESS[0] FAILED[1] REPEATEDone failed test - com.COMPANY.mdm.tests.events.COMPANYGlobalCustomerIdTest.testrepeated from local PC one more time by Mikołaj Morawskiduring run on Reltio there were multiple events and test got blockedTest was repeated manually and passed with successEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_emea/485/testReport/[88] SUCCESS[0] FAILED[1] REPEATEDone failed test -  com.COMPANY.mdm.tests.dcr2.DCR2ServiceTest.shouldCreateHCPOneKeyRedirectToReltiorepeated from local PC one more time by Mikołaj Morawskiduring run on Reltio there were multiple events and 
test got blocked. Test was repeated manually and passed with successGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gbl/395/testReport/[73] SUCCESS[0] FAILED[0] REPEATEDGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gblus/332/testReport/[74] SUCCESS[0] FAILED[1] REPEATEDone failed test -  com.COMPANY.mdm.tests.events.COMPANYGlobalCustomerIdSearchOnLostMergeEntitiesTest.test; repeated from local PC one more time by Mikołaj Morawski; during run on Reltio there were multiple events and test got blocked. Test was repeated manually and passed with successTests ready and approved:approved by: Mikołaj MorawskiRelease ready and approved:approved by: Mikołaj MorawskiSTAGE deployment details:STAGE deployment date:Wed Feb 14 08:57:24 UTC 2024Deployment approved:approved by: Mikołaj MorawskiDeployed by:Mikołaj MorawskiENV:LinkStatusDetailsAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20NPROD/job/deploy_mdmhub_amer_nprod_amer-stage/342/SUCCESSAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20NPROD/job/deploy_mdmhub_apac_nprod_apac-stage/161/SUCCESSEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20NPROD/job/deploy_mdmhub_emea_nprod_emea-stage/378/SUCCESSGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20NPROD/job/deploy_mdmhub_emea_nprod_gbl-stage/220/SUCCESSGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20NPROD/job/deploy_mdmhub_amer_nprod_gblus-stage/243/SUCCESS PROD deployment report:PROD deployment date:Thu Feb 29 09:29:58 UTC 2024Deployment approved:approved by: Mikołaj MorawskiDeployed by:Filip 
SądowiczENV:LinkStatusDetailsAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20PROD/job/deploy_mdmhub_amer_prod_amer-prod/269/SUCCESSAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20PROD/job/deploy_mdmhub_apac_prod_apac-prod/238/SUCCESSEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20PROD/job/deploy_mdmhub_emea_prod_emea-prod/284/SUCCESSGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20PROD/job/deploy_mdmhub_emea_prod_gbl-prod/200/SUCCESSGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20PROD/job/deploy_mdmhub_amer_prod_gblus-prod/239/SUCCESS"
},
{
"title": "4.1.31",
"pageID": "401024639",
"pageLink": "/display/GMDM/4.1.31",
"content": "Release report:Release:4.1.31Release date:Fri Mar 01 12:21:23 UTC 2024STATUSES: SUCCESS / FAILED / REPEATEDReleased by:Kacper UrbańskiPlanned GO-LIVE:Mon Mar 04StageLinkStatusComments (images 600px)Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/master/98/SUCCESS CHANGELOG:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/master/98/artifact/CHANGELOG.md/*view*/Unit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/master/98/testReport/SUCCESSTODOIntegration tests:Execution date: N/AExecuted by: N/AAMERN/A[0] SUCCESS[0] FAILED[0] REPEATEDN/AAPACN/A[0] SUCCESS[0] FAILED[0] REPEATEDN/AEMEAN/A[0] SUCCESS[0] FAILED[0] REPEATEDN/AGBL(EX-US)N/A[0] SUCCESS[0] FAILED[0] REPEATEDN/AGBLUSN/A[0] SUCCESS[0] FAILED[0] REPEATEDN/ATests ready and approved:approved by: N/ARelease ready and approved:approved by: Kacper UrbańskiSTAGE deployment details:STAGE deployment date:TODO Wed Jan 17 09:35:31 UTC 2024Deployment approved:approved by: Kacper UrbańskiDeployed by:TODOENV:LinkStatusDetailsAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20NPROD/job/deploy_mdmhub_amer_nprod_amer-stage/344/SUCCESSAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20NPROD/job/deploy_mdmhub_apac_nprod_apac-stage/163/SUCCESSEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20NPROD/job/deploy_mdmhub_emea_nprod_emea-stage/385/SUCCESSGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20NPROD/job/deploy_mdmhub_emea_nprod_gbl-stage/222/SUCCESSGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20NPROD/job/deploy_mdmhub_amer_nprod_gblus-stage/245/SUCCESS PROD deployment report:PROD deployment date:TODO Wed Jan 17 09:35:31 UTC 2024Deployment approved:approved by: Kacper UrbańskiDeployed 
by:TODOENV:LinkStatusDetailsAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20PROD/job/deploy_mdmhub_amer_prod_amer-prod/275/SUCCESSAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20PROD/job/deploy_mdmhub_apac_prod_apac-prod/239/SUCCESSEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20PROD/job/deploy_mdmhub_emea_prod_emea-prod/288/SUCCESSGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20PROD/job/deploy_mdmhub_emea_prod_gbl-prod/202/SUCCESSGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20PROD/job/deploy_mdmhub_amer_prod_gblus-prod/241/SUCCESS"
},
{
"title": "4.1.29",
"pageID": "401613066",
"pageLink": "/display/GMDM/4.1.29",
"content": "Release report:Release:4.1.29Release date:Wed Feb 28 10:32:26 UTC 2024STATUSES: SUCCESS / FAILED / REPEATEDReleased by:Kacper UrbańskiPlanned GO-LIVE:Thu Mar 07 (in 1 weeks)StageLinkStatusComments (images 600px)Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/472/SUCCESS CHANGELOG:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/4c3f8a5fc460bb0cc20e55f736850f2416b6e9f3#CHANGELOG.mdUnit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/472/testReport/SUCCESSIntegration tests:Execution date: Wed Feb 28Executed by: Mikołaj MorawskiAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_amer/407/testReport/[84] SUCCESS[0] FAILED[1] REPEATEDone failed test - com.COMPANY.mdm.tests.events.COMPANYGlobalCustomerIdTest.testrepeated from local PC one more time by Mikołaj Morawskiduring run on Reltio there were multiple events and test got blockedTest was repeated manually and passed with successAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_apac/350/testReport/[66] SUCCESS[18] FAILED[3] REPEATEDAll [18] DCR tests failed due to RDM issue on Reltio side:same set of tests is successful on EMEA and AMER so logic is working correctlyRCA:Repeated tests:repeated from local PC one more time by Mikołaj Morawskiduring run on Reltio there were multiple events and test got blockedTest was repeated manually and passed with successEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_emea/501/testReport/[84] SUCCESS[0] FAILED[3] REPEATEDRepeated tests:repeated from local PC one more time by Mikołaj Morawskiduring run on Reltio there were multiple events and test got blockedTest was repeated manually and passed with 
successGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gbl/411/testReport/[72] SUCCESS[0] FAILED[0] REPEATEDGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gblus/349/testReport/[74] SUCCESS[0] FAILED[0] REPEATEDTests ready and approved:approved by: Mikołaj MorawskiRelease ready and approved:approved by: Mikołaj MorawskiSTAGE deployment details:STAGE deployment date:Wed Feb 28 11:17:34 UTC 2024Deployment approved:approved by: Mikołaj MorawskiDeployed by:Kacper UrbańskiENV:LinkStatusDetailsAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/job/deploy_mdmhub_amer_nprod_amer-stage/343/SUCCESSAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/job/deploy_mdmhub_amer_nprod_amer-stage/343/SUCCESSEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/job/deploy_mdmhub_emea_nprod_emea-stage/382/SUCCESSGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/job/deploy_mdmhub_emea_nprod_gbl-stage/221/SUCCESSGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/job/deploy_mdmhub_amer_nprod_gblus-stage/244/SUCCESS PROD deployment report:PROD deployment date:TODODeployment approved:approved by: Mikołaj MorawskiDeployed by:Rafał KućENV:LinkStatusDetailsAMERTODOSUCCESSAPACTODOSUCCESSEMEATODOSUCCESSGBL(EX-US)TODOSUCCESSGBLUSTODOSUCCESS"
},
{
"title": "4.3.0",
"pageID": "408556244",
"pageLink": "/display/GMDM/4.3.0",
"content": "Release report:Release:4.3.0Release date:Thu Mar 14 11:30:13 UTC 2024STATUSES: SUCCESS / FAILED / REPEATEDReleased by:Mikołaj MorawskiPlanned GO-LIVE:Tue Mar 21 (in 1 weeks)StageLinkStatusComments (images 600px)Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/477/SUCCESS CHANGELOG:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/7d6036dfb79366537f79272b026ab24ec1ea1b62#CHANGELOG.mdhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/d30b468528cb98adc181b4e5d192c776328d70e8#CHANGELOG.mdhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/73bdcaaa0997b156ce79728af6c90dfd0f3cfa1b#CHANGELOG.mdUnit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/477/testReport/SUCCESSIntegration tests:Execution date: Thu Mar 14Executed by: Mikołaj MorawskiAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_amer/419/testReport/[81] SUCCESS[0] FAILED[3] REPEATEDDCR tests failed due to RDM issue on Reltio side:same set of tests is successful on EMEA and AMER so logic is working correctlyRCA: expected:<A[UTO_REJECTED]> but was:<A[uto Rejected]>Repeated tests:repeated from local PC one more time by Mikołaj MorawskiTest was repeated manually and passed with successAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_apac/359/testReport/[89] SUCCESS[0] FAILED[0] REPEATEDEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_emea/511/testReport/[89] SUCCESS[0] FAILED[0] REPEATEDGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gbl/420/testReport/[72] SUCCESS[0] FAILED[0] 
REPEATEDGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gblus/358/testReport/[74] SUCCESS[0] FAILED[0] REPEATEDTests ready and approved:approved by: Mikołaj MorawskiRelease ready and approved:approved by: Mikołaj MorawskiSTAGE deployment details:STAGE deployment date:Thu Mar 14 14:48:33 UTC 2024Deployment approved:approved by: Mikołaj MorawskiDeployed by:Mikołaj MorawskiENV:LinkStatusDetailsAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20NPROD/job/deploy_mdmhub_amer_nprod_amer-stage/351/SUCCESSAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20NPROD/job/deploy_mdmhub_apac_nprod_apac-stage/182/SUCCESSEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20NPROD/job/deploy_mdmhub_emea_nprod_emea-stage/392/SUCCESSGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20NPROD/job/deploy_mdmhub_emea_nprod_gbl-stage/224/SUCCESSGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20NPROD/job/deploy_mdmhub_amer_nprod_gblus-stage/247/SUCCESS PROD deployment report:PROD deployment date:Thu Mar 21 11:00:42 UTC 2024Deployment approved:approved by: Mikołaj MorawskiDeployed by:Filip SądowiczENV:LinkStatusDetailsAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20PROD/job/deploy_mdmhub_amer_prod_amer-prod/282/SUCCESSAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20PROD/job/deploy_mdmhub_apac_prod_apac-prod/246/SUCCESSEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20PROD/job/deploy_mdmhub_emea_prod_emea-prod/302/SUCCESSGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20PROD/job/deploy_mdmhub_emea_prod_gbl-prod/207/SUCCESSGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20PROD/job/deploy_mdmhub_amer_prod_gblus-prod/246/SUCCESS"
},
{
"title": "4.6.0",
"pageID": "410815299",
"pageLink": "/display/GMDM/4.6.0",
"content": "Release report:Release:4.6.0Release date:Thu Mar 21 14:01:19 UTC 2024STATUSES: SUCCESS / FAILED / REPEATEDReleased by:Mikołaj MorawskiPlanned GO-LIVE:Tue Mar 28 (in 1 weeks)StageLinkStatusComments (images 600px)Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/484/++ https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/485/SUCCESS CHANGELOG:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/9a3b6fe4bdf5573691cb37d5f994fe0f93b661fa#CHANGELOG.mdhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/c9c3d307b27704264bf4d0b5fefc51bc02b78e79#CHANGELOG.mdhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/99cadba8373475c979f12b0c2ae815908b72b582#CHANGELOG.mdUnit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/484/testReport/SUCCESSIntegration tests:Execution date: Thu Mar 21Executed by: Mikołaj MorawskiAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_amer/422/testReport/[83] SUCCESS[1] FAILED[0] REPEATEDAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_apac/365/testReport/[87] SUCCESS[2] FAILED[0] REPEATEDDCR tests failed due to RDM issue on Reltio side:same set of tests is successful  AMER so logic is working correctlyRCA:org.junit.ComparisonFailure: expected:<A[uto Rejected]> but was:<A[UTO_REJECTED]>Ignoring and approved by Mikołaj Morawski because we are still waiting for RDM configuration on DEVEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_emea/517/testReport/[87] SUCCESS[2] FAILED[0] REPEATEDDCR tests failed due to RDM issue on Reltio side:same set of tests is successful  AMER so logic is working 
correctly. RCA: org.junit.ComparisonFailure: expected:<A[uto Rejected]> but was:<A[UTO_REJECTED]>. Ignored and approved by Mikołaj Morawski because we are still waiting for RDM configuration on DEVGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gbl/426/testReport/[72] SUCCESS[0] FAILED[0] REPEATEDGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gblus/363/testReport/[74] SUCCESS[0] FAILED[0] REPEATEDTests ready and approved:approved by: Mikołaj MorawskiRelease ready and approved:approved by: Mikołaj MorawskiSTAGE deployment details:STAGE deployment date:Thu Mar 26 08:01:19 UTC 2024Deployment approved:approved by: Mikołaj MorawskiDeployed by:Mikołaj MorawskiENV:LinkStatusDetailsAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/job/deploy_mdmhub_amer_nprod_amer-stage/355/SUCCESSAPACN/A (blocked due to VOD project)SUCCESSEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/job/deploy_mdmhub_emea_nprod_emea-stage/398/SUCCESSGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/job/deploy_mdmhub_amer_nprod_gblus-stage/252/SUCCESSGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/job/deploy_mdmhub_emea_nprod_gbl-stage/228/SUCCESS PROD deployment report:PROD deployment date:Thu Mar 28 09:23:52 UTC 2024Deployment approved:approved by: Mikołaj MorawskiDeployed by:Filip 
SądowiczENV:LinkStatusDetailsAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20PROD/job/deploy_mdmhub_amer_prod_amer-prod/290/SUCCESSAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20PROD/job/deploy_mdmhub_apac_prod_apac-prod/251/SUCCESSEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20PROD/job/deploy_mdmhub_emea_prod_emea-prod/307/SUCCESSGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20PROD/job/deploy_mdmhub_emea_prod_gbl-prod/210/SUCCESSGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20PROD/job/deploy_mdmhub_amer_prod_gblus-prod/251/SUCCESS"
},
{
"title": "4.9.0",
"pageID": "415995497",
"pageLink": "/display/GMDM/4.9.0",
"content": "Release report:Release:4.9.0Release date:Thu Apr 10 10:01:19 UTC 2024STATUSES: SUCCESS / FAILED / REPEATEDReleased by:Rafał KućPlanned GO-LIVE:Tue Apr 11 (in 1 day)StageLinkStatusComments (images 600px)Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/491/FAILEDThe code has been released but job failed because of issue related to docker cleanupCHANGELOG:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/0467698f97b08623c8edc9f134ea2156737c8df7#CHANGELOG.mdUnit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/491/testReport/SUCCESSIntegration tests:Execution date: Thu Apr 10Executed by: Rafał KućAMER[0] SUCCESS[0] FAILED[0] REPEATEDAPACSkipped due to development of IoD project[0] SUCCESS[0] FAILED[0] REPEATEDEMEA[0] SUCCESS[0] FAILED[0] REPEATEDGBL(EX-US)[0] SUCCESS[0] FAILED[0] REPEATEDGBLUS[0] SUCCESS[0] FAILED[0] REPEATEDTests ready and approved:approved by: Rafał KućRelease ready and approved:approved by: Rafał KućSTAGE deployment details:STAGE deployment date:Thu Apr 10 11:01:19 UTC 2024Deployment approved:approved by: Rafał KućDeployed by:Rafał KućENV:LinkStatusDetailsAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20NPROD/job/deploy_mdmhub_amer_nprod_amer-stage/363/SUCCESSAPACN/A (blocked due to VOD project)SUCCESSEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20NPROD/job/deploy_mdmhub_emea_nprod_emea-stage/408/SUCCESSGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20NPROD/job/deploy_mdmhub_emea_nprod_gbl-stage/SUCCESSGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20NPROD/job/deploy_mdmhub_amer_nprod_gblus-stage/SUCCESS PROD deployment report:PROD deployment date:Thu Apr 11 09:23:52 UTC 2024Deployment approved:approved by: Rafał KućDeployed by:Rafał 
KućENV:LinkStatusDetailsAMERSUCCESSAPACSUCCESSEMEASUCCESSGBL(EX-US)SUCCESSGBLUSSUCCESS"
},
{
"title": "4.10.0",
"pageID": "415212536",
"pageLink": "/display/GMDM/4.10.0",
"content": "Release report:Release:4.10.0Release date:Thu Apr 18 19:03:35 UTC 2024STATUSES: SUCCESS / FAILED / REPEATEDReleased by:Krzysztof PrawdzikPlanned GO-LIVE:Wed 24 (in 1 week)StageLinkStatusComments (images 600px)Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/492/SUCCESS CHANGELOG:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/2939c70fcc57caa8040a895889c88af99a396665#CHANGELOG.mdhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/0467698f97b08623c8edc9f134ea2156737c8df7#CHANGELOG.mdhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/d110ea29c10875123e738d32eb166875db7a6948#CHANGELOG.mdUnit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/492/testReport/SUCCESSIntegration tests:Execution date: Thu Apr 18Executed by: Krzysztof PrawdzikAMER[85] SUCCESS[0] FAILED[0] REPEATEDAPAC[89] SUCCESS[0] FAILED[0] REPEATEDEMEA[89] SUCCESS[0] FAILED[0] REPEATEDGBL(EX-US)[72] SUCCESS[0] FAILED[0] REPEATEDGBLUS[74] SUCCESS[0] FAILED[0] REPEATEDTests ready and approved:approved by: Krzysztof PrawdzikRelease ready and approved:approved by: Mikołaj MorawskiSTAGE deployment details:STAGE deployment date:Thu Apr 18 19:57:21 UTC 2024Deployment approved:approved by: Mikołaj MorawskiDeployed by:Krzysztof 
PrawdzikENV:LinkStatusDetailsAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20NPROD/job/deploy_mdmhub_amer_nprod_amer-stage/369/SUCCESSAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20NPROD/job/deploy_mdmhub_apac_nprod_apac-stage/202/SUCCESSEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20NPROD/job/deploy_mdmhub_emea_nprod_emea-stage/413/SUCCESSGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20NPROD/job/deploy_mdmhub_emea_nprod_gbl-stage/236/SUCCESSGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20NPROD/job/deploy_mdmhub_amer_nprod_gblus-stage/261/SUCCESS PROD deployment report:PROD deployment date:Thu Apr 25 ??:??:?? UTC 2024Deployment approved:approved by: Mikołaj MorawskiDeployed by:Krzysztof PrawdzikENV:LinkStatusDetailsAMERSUCCESSAPACSUCCESSEMEASUCCESSGBL(EX-US)SUCCESSGBLUSSUCCESS"
},
{
"title": "4.11.0",
"pageID": "416001899",
"pageLink": "/display/GMDM/4.11.0",
"content": "Release report:Release:4.11.0Release date:Tue Apr 23 10:41:13 UTC 2024STATUSES: SUCCESS / FAILED / REPEATEDReleased by:Krzysztof PrawdzikPlanned GO-LIVE:Mon Apr 29 (in 1 week)StageLinkStatusComments (images 600px)Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/493/SUCCESS CHANGELOG:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/20128ed85fda3830ebbb2874f7cd9cecd3031e18#CHANGELOG.mdhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/2939c70fcc57caa8040a895889c88af99a396665#CHANGELOG.mdhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/0467698f97b08623c8edc9f134ea2156737c8df7#CHANGELOG.mdhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/d110ea29c10875123e738d32eb166875db7a6948#CHANGELOG.mdUnit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/493/testReport/SUCCESSIntegration tests:Execution date: Tue Apr 23Executed by: Krzysztof PrawdzikAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_amer/447/testReport/[84] SUCCESS[0] FAILED[0] REPEATEDAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_apac/382/testReport/[93] SUCCESS[0] FAILED[8] REPEATEDpart of China tests failed due to some timeout:RCA: Action timeout after 360000 milliseconds.Failed to receive message on endpoint: 'apac-dev-out-full-mde-cn'Repeated tests:repeated from local PC one more time by Krzysztof PrawdzikTest was repeated manually and passed with successEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_emea/539/testReport/[89] SUCCESS[0] FAILED[0] 
REPEATEDGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gbl/445/testReport/[72] SUCCESS[0] FAILED[0] REPEATEDGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gblus/386/testReport/[74] SUCCESS[0] FAILED[0] REPEATEDTests ready and approved:approved by: Krzysztof PrawdzikRelease ready and approved:approved by: Mikołaj MorawskiSTAGE deployment details:STAGE deployment date:Tue Apr 23 11:26:52 UTC 2024Deployment approved:approved by: Krzysztof PrawdzikDeployed by:Krzysztof PrawdzikENV:LinkStatusDetailsAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20NPROD/job/deploy_mdmhub_amer_nprod_amer-stage/371/Tue Apr 23 SUCCESSAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20NPROD/job/deploy_mdmhub_apac_nprod_apac-stage/204/SUCCESSEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20NPROD/job/deploy_mdmhub_emea_nprod_emea-stage/418/SUCCESSGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20NPROD/job/deploy_mdmhub_emea_nprod_gbl-stage/237/SUCCESSGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20NPROD/job/deploy_mdmhub_amer_nprod_gblus-stage/262/SUCCESS PROD deployment report:PROD deployment date:Mon Apr 29 08:37:50 UTC 2024Deployment approved:approved by: Mikołaj MorawskiDeployed by:Krzysztof 
PrawdzikENV:LinkStatusDetailsAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20PROD/job/deploy_mdmhub_amer_prod_amer-prod/304/SUCCESSAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20PROD/job/deploy_mdmhub_apac_prod_apac-prod/256/SUCCESSEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20PROD/job/deploy_mdmhub_emea_prod_emea-prod/323/SUCCESSGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20PROD/job/deploy_mdmhub_emea_prod_gbl-prod/215/SUCCESSGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20PROD/job/deploy_mdmhub_amer_prod_gblus-prod/258/SUCCESS"
},
{
"title": "4.11.1",
"pageID": "415221783",
"pageLink": "/display/GMDM/4.11.1",
"content": "Release report:Release:4.11.1Release date:Wed May 08 08:16:41 UTC 2024STATUSES: SUCCESS / FAILED / REPEATEDReleased by:Krzysztof PrawdzikPlanned GO-LIVE:Wed May 08 (same day)StageLinkStatusComments (images 600px)Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/master/101/SUCCESS CHANGELOG:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/dbe984a2a9bb73ba141aad9386d741fd3fc8334d#CHANGELOG.mdUnit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/493/testReport/SUCCESSIntegration tests:Execution date: N/AExecuted by: N/AAMERN/A[0] SUCCESS[0] FAILED[0] REPEATEDN/AAPACN/A[0] SUCCESS[0] FAILED[0] REPEATEDN/AEMEAN/A[0] SUCCESS[0] FAILED[0] REPEATEDN/AGBL(EX-US)N/A[0] SUCCESS[0] FAILED[0] REPEATEDN/AGBLUSN/A[0] SUCCESS[0] FAILED[0] REPEATEDN/ATests ready and approved:approved by: N/ARelease ready and approved:approved by: STAGE deployment details:STAGE deployment date:Wed May 08 08:54:16 UTC 2024Deployment approved:approved by: Krzysztof PrawdzikDeployed by:Krzysztof PrawdzikENV:LinkStatusDetailsAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20NPROD/job/deploy_mdmhub_amer_nprod_amer-stage/374/SUCCESSAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20NPROD/job/deploy_mdmhub_apac_nprod_apac-stage/209/SUCCESSEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20NPROD/job/deploy_mdmhub_emea_nprod_emea-stage/420/SUCCESSGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20NPROD/job/deploy_mdmhub_emea_nprod_gbl-stage/239/SUCCESSGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20NPROD/job/deploy_mdmhub_amer_nprod_gblus-stage/272/SUCCESS PROD deployment report:PROD deployment date:Wed May 08 10:07:44 UTC 2024Deployment approved:approved by: Deployed by:Krzysztof 
PrawdzikENV:LinkStatusDetailsAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20PROD/job/deploy_mdmhub_amer_prod_amer-prod/307/SUCCESSAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20PROD/job/deploy_mdmhub_apac_prod_apac-prod/261/SUCCESSEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20PROD/job/deploy_mdmhub_emea_prod_emea-prod/332/SUCCESSGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20PROD/job/deploy_mdmhub_emea_prod_gbl-prod/218/SUCCESSGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20PROD/job/deploy_mdmhub_amer_prod_gblus-prod/263/SUCCESS"
},
{
"title": "4.12.0",
"pageID": "425492972",
"pageLink": "/display/GMDM/4.12.0",
"content": "Release report:Release:4.12.0Release date:Mon May 13 12:03:50 UTC 2024STATUSES: SUCCESS / FAILED / REPEATEDReleased by:Krzysztof PrawdzikPlanned GO-LIVE:Thu May 16StageLinkStatusComments (images 600px)Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/2/SUCCESS CHANGELOG:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/dc117aa31a81375f4572ca68a22491d02094e91e#CHANGELOG.mdUnit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/2/testReport/SUCCESSIntegration tests:Execution date: Mon May 13Executed by: Krzysztof PrawdzikAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_amer/463/testReport/[81] SUCCESS[3] FAILED[0] REPEATEDRCA: Tenant [wn60kG248ziQSMW] is not registered.APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_apac/398/testReport/[99] SUCCESS[0] FAILED[2] REPEATEDone of China tests failed due to timeout:RCA: Action timeout after 360000 milliseconds.Failed to receive message on endpoint: 'apac-dev-out-full-mde-cn'Repeated tests:repeated from local PC one more time by Krzysztof PrawdzikTest was repeated manually and passed with successEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_emea/554/testReport/[88] SUCCESS[1] FAILED[0] REPEATEDRCA: Tenant [wn60kG248ziQSMW] is not registered.GBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gbl/459/testReport/[72] SUCCESS[0] FAILED[0] REPEATEDGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gblus/401/testReport/[73] SUCCESS[0] FAILED[1] REPEATEDone of the tests failed due to insufficient time to get proper eventType:RCA: Validation failed: Values not equal for element '$.eventType', expected 'HCP_MERGED' but was 
'ENTITY_POTENTIAL_LINK_FOUND'Repeated test:repeated from local PC one more time by Krzysztof PrawdzikTest was repeated manually with increased number of retries and passed with successTests ready and approved:approved by: Krzysztof PrawdzikRelease ready and approved:approved by: Krzysztof PrawdzikSTAGE deployment details:STAGE deployment date:Mon May 13 12:52:59 UTC 2024Deployment approved:approved by: Krzysztof PrawdzikDeployed by:Krzysztof PrawdzikENV:LinkStatusDetailsAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20NPROD/job/deploy_mdmhub_amer_nprod_amer-stage/376/SUCCESSAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20NPROD/job/deploy_mdmhub_apac_nprod_apac-stage/211/SUCCESSEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20NPROD/job/deploy_mdmhub_emea_nprod_emea-stage/422/SUCCESSGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20NPROD/job/deploy_mdmhub_emea_nprod_gbl-stage/241/SUCCESSGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20NPROD/job/deploy_mdmhub_amer_nprod_gblus-stage/275/SUCCESS PROD deployment report:PROD deployment date:Thu May 16 09:35:26 UTC 2024Deployment approved:approved by: Krzysztof PrawdzikDeployed by:Krzysztof 
PrawdzikENV:LinkStatusDetailsAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20PROD/job/deploy_mdmhub_amer_prod_amer-prod/309/SUCCESSAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20PROD/job/deploy_mdmhub_apac_prod_apac-prod/263/SUCCESSEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20PROD/job/deploy_mdmhub_emea_prod_emea-prod/336/SUCCESSGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20PROD/job/deploy_mdmhub_emea_prod_gbl-prod/220/SUCCESSGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20PROD/job/deploy_mdmhub_amer_prod_gblus-prod/266/SUCCESS"
},
{
"title": "4.12.1",
"pageID": "425136247",
"pageLink": "/display/GMDM/4.12.1",
"content": "Release report:Release:4.12.1Release date:Tue May 21 08:44:41 UTC 2024STATUSES: SUCCESS / FAILED / REPEATEDReleased by:Krzysztof PrawdzikPlanned GO-LIVE:Tue May 21 (same day)StageLinkStatusComments (images 600px)Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/master/102/SUCCESS CHANGELOG:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/0849434b3c67a63f36b13211cb19c23e4c77b25e#CHANGELOG.mdUnit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/master/102/testReport/SUCCESSIntegration tests:Execution date: N/AExecuted by: N/AAMERN/A[0] SUCCESS[0] FAILED[0] REPEATEDN/AAPACN/A[0] SUCCESS[0] FAILED[0] REPEATEDN/AEMEAN/A[0] SUCCESS[0] FAILED[0] REPEATEDN/AGBL(EX-US)N/A[0] SUCCESS[0] FAILED[0] REPEATEDN/AGBLUSN/A[0] SUCCESS[0] FAILED[0] REPEATEDN/ATests ready and approved:approved by: N/ARelease ready and approved:approved by: Krzysztof PrawdzikSTAGE deployment details:STAGE deployment date:Tue May 21 09:26:46 UTC 2024Deployment approved:approved by: Krzysztof PrawdzikDeployed by:Krzysztof PrawdzikENV:LinkStatusDetailsAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20NPROD/job/deploy_mdmhub_amer_nprod_amer-stage/377/SUCCESSAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20NPROD/job/deploy_mdmhub_apac_nprod_apac-stage/212/SUCCESSEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20NPROD/job/deploy_mdmhub_emea_nprod_emea-stage/423/SUCCESSGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20NPROD/job/deploy_mdmhub_emea_nprod_gbl-stage/242/SUCCESSGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20NPROD/job/deploy_mdmhub_amer_nprod_gblus-stage/279/SUCCESS PROD deployment report:PROD deployment date:Deployment approved:approved by: Krzysztof PrawdzikDeployed by:Krzysztof 
PrawdzikENV:LinkStatusDetailsAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20PROD/job/deploy_mdmhub_amer_prod_amer-prod/314/SUCCESSAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20PROD/job/deploy_mdmhub_apac_prod_apac-prod/265/SUCCESSEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20PROD/job/deploy_mdmhub_emea_prod_emea-prod/340/SUCCESSGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20PROD/job/deploy_mdmhub_emea_prod_gbl-prod/221/SUCCESSGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20PROD/job/deploy_mdmhub_amer_prod_gblus-prod/270/SUCCESS"
},
{
"title": "4.14.0",
"pageID": "430082856",
"pageLink": "/display/GMDM/4.14.0",
"content": "Release report:Release:4.14.0Release date:Wed May 29 15:14:52 UTC 2024STATUSES: SUCCESS / FAILED / REPEATEDReleased by:Krzysztof PrawdzikPlanned GO-LIVE:Thu Jun 6StageLinkStatusComments (images 600px)Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/4/SUCCESS CHANGELOG:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/0d962b08c9a6caa4520868f8c33a577c85356a8f#CHANGELOG.mdUnit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/4/testReport/SUCCESSIntegration tests:Execution date: Wed May 29Executed by: Krzysztof PrawdzikAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_amer/473/[83] SUCCESS[0] FAILED[1] REPEATEDRecent changes in com.COMPANY.mdm.tests.dcr2.DCR2ServiceTest.shouldInactivateHCP test have caused its instability.repeated from local PC one more time by Krzysztof PrawdzikTest was repeated manually and passed with successfix for this test is being preparedAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_apac/413/[99] SUCCESS[0] FAILED[1] REPEATEDEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_emea/565/[88] SUCCESS[0] FAILED[1] REPEATEDRecent changes in com.COMPANY.mdm.tests.dcr2.DCR2ServiceTest.shouldInactivateHCP test have caused its instability.repeated from local PC one more time by Krzysztof PrawdzikTest was repeated manually and passed with successfix for this test is being preparedGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gbl/469/[72] SUCCESS[0] FAILED[0] REPEATEDGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gblus/411/testReport/[74] SUCCESS[0] FAILED[0] REPEATEDTests ready and approved:approved by: Krzysztof PrawdzikRelease ready and 
approved:approved by: Krzysztof PrawdzikSTAGE deployment details:STAGE deployment date:Wed May 29 16:36:37 UTC 2024Deployment approved:approved by: Krzysztof PrawdzikDeployed by:Krzysztof PrawdzikENV:LinkStatusDetailsAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20NPROD/job/deploy_mdmhub_amer_nprod_amer-stage/379/SUCCESSAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20NPROD/job/deploy_mdmhub_apac_nprod_apac-stage/214/SUCCESSEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20NPROD/job/deploy_mdmhub_emea_nprod_emea-stage/426/SUCCESSGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20NPROD/job/deploy_mdmhub_emea_nprod_gbl-stage/244/SUCCESSGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20NPROD/job/deploy_mdmhub_amer_nprod_gblus-stage/287/SUCCESS STAGE test phase details:Verification date06 Jun 2024 17:05 - 18:00 + 07 Jun 2024 12:15Verification byBachanowicz, Mieczysław (Irek) DashboardHintsStatusDetailsMDMHUB / MDMHUB Component errorsIncreased number of alerts → there's certainly something wrongSUCCESSAPAC NPRODEMEA NPRODMDMHUB / MDMHUB KPIsSpikes, especially wide ones, suggest potential issueSUCCESSMDMHUB / MDMHUB Components resourceComponent specific analysis, especially good for the ones updated within latest release (check news fragments)SUCCESSBatch serviceEntity enricherMap channelMDM AuthMDM ReconciliationRaw dataGeneral / Snowflake QC TrendsQuick and easy way to determine if there's something wrong with QC. 
Any change (lower/higher) → potential issueSUCCESSKubernetes / Vertical Pod Autoscaler (VPA)Change in memory requirement before and after deployment → potential issue not verifiedKubernetes / K8s Cluster Usage StatisticsGood for PROD environments since NPROD is too prone to project specific loadsSUCCESSKubernetes / Pod MonitoringComponent specific analysis, especially good for the ones updated within latest release (check news fragments)APAC DEVGeneral / kubernetes-persistent-volumes Storage trend over time SUCCESSGeneral / Alerts Statistics Increase after release → potential issue SUCCESSAPAC NPRODGBLUS NPRODGBLGeneral / SSL Certificates and Endpoint AvailabilityLower widget, multiple stacked endpoints at the same time for a long periodSUCCESSPROD deployment report:PROD deployment date:Thu Jun 06 11:37:04 UTC 2024Deployment approved:approved by: Krzysztof PrawdzikDeployed by:Krzysztof PrawdzikENV:LinkStatusDetailsAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20PROD/job/deploy_mdmhub_amer_prod_amer-prod/322/SUCCESSAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20PROD/job/deploy_mdmhub_apac_prod_apac-prod/268/SUCCESSEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20PROD/job/deploy_mdmhub_emea_prod_emea-prod/349/SUCCESSGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20PROD/job/deploy_mdmhub_emea_prod_gbl-prod/224/SUCCESSGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20PROD/job/deploy_mdmhub_amer_prod_gblus-prod/273/SUCCESSPROD deploy hypercare details:Verification date13 Jun 2024 12:37Verification byBachanowicz, Mieczysław (Irek) DashboardHintsStatusDetailsMDMHUB / MDMHUB Component errorsIncreased number of alerts → there's certainly something wrongSUCCESSMDMHUB / MDMHUB KPIsSpikes, especially wide ones, suggest potential issueSUCCESSMDMHUB / MDMHUB Components resourceComponent specific analysis, especially good 
for the ones updated within latest release (check news fragments)General / Snowflake QC TrendsQuick and easy way to determine if there's something wrong with QC. Any change (lower/higher) → potential issueSUCCESSKubernetes / Vertical Pod Autoscaler (VPA)Change in memory requirement before and after deployment → potential issue Kubernetes / K8s Cluster Usage StatisticsGood for PROD environments since NPROD is too prone to project specific loadsSUCCESSKubernetes / Pod MonitoringComponent specific analysis, especially good for the ones updated within latest release (check news fragments)SUCCESSGeneral / kubernetes-persistent-volumes Storage trend over time SUCCESSGeneral / Alerts Statistics Increase after release → potential issue SUCCESSGeneral / SSL Certificates and Endpoint AvailabilityLower widget, multiple stacked endpoints at the same time for a long periodSUCCESS"
},
{
"title": "4.12.2",
"pageID": "430083918",
"pageLink": "/display/GMDM/4.12.2",
"content": "Release report:Release:4.12.2Release date:Tue Jun 04 12:19:52 UTC 2024STATUSES: SUCCESS / FAILED / REPEATEDReleased by:Krzysztof PrawdzikPlanned GO-LIVE:Tue Jun 4 (same day)StageLinkStatusComments (images 600px)Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/master/103/SUCCESS CHANGELOG:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/0abf8b37a2ac6b27c093cba3f3288ebd2c9ebfc4#CHANGELOG.mdUnit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/master/103/testReport/SUCCESSIntegration tests:Execution date: N/AExecuted by: N/AAMERN/A[0] SUCCESS[0] FAILED[0] REPEATEDN/AAPACN/A[0] SUCCESS[0] FAILED[0] REPEATEDN/AEMEAN/A[0] SUCCESS[0] FAILED[0] REPEATEDN/AGBL(EX-US)N/A[0] SUCCESS[0] FAILED[0] REPEATEDN/AGBLUSN/A[0] SUCCESS[0] FAILED[0] REPEATEDN/ATests ready and approved:approved by: N/ARelease ready and approved:approved by: Krzysztof PrawdzikSTAGE deployment details:STAGE deployment date:Tue Jun 04 13:27:51 UTC 2024Deployment approved:approved by: Krzysztof PrawdzikDeployed by:Krzysztof PrawdzikENV:LinkStatusDetailsAMERSUCCESSAPACSUCCESSEMEASUCCESSGBL(EX-US)SUCCESSGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20NPROD/job/deploy_mdmhub_amer_nprod_gblus-stage/288/SUCCESS PROD deployment report:PROD deployment date:Deployment approved:approved by: Krzysztof PrawdzikDeployed by:Krzysztof 
PrawdzikENV:LinkStatusDetailsAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20PROD/job/deploy_mdmhub_amer_prod_amer-prod/320/SUCCESSAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20PROD/job/deploy_mdmhub_apac_prod_apac-prod/267/SUCCESSEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20PROD/job/deploy_mdmhub_emea_prod_emea-prod/347/SUCCESSGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20PROD/job/deploy_mdmhub_amer_prod_gblus-prod/272/SUCCESSGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20PROD/job/deploy_mdmhub_emea_prod_gbl-prod/223/SUCCESS"
},
{
"title": "4.14.1",
"pageID": "430087408",
"pageLink": "/display/GMDM/4.14.1",
"content": "Release report:Release:4.14.1Release date:Tue Jun 11 10:27:15 UTC 2024STATUSES: SUCCESS / FAILED / REPEATEDReleased by:Krzysztof PrawdzikPlanned GO-LIVE:Tue Jun 11 (same day)StageLinkStatusComments (images 600px)Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/master/105/SUCCESS CHANGELOG:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/69c634998c0b05dd2ed74677bcb638c55213b940#CHANGELOG.mdUnit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/master/105/testReport/SUCCESSIntegration tests:Execution date: N/AExecuted by: N/AAMERN/A[0] SUCCESS[0] FAILED[0] REPEATEDN/AAPACN/A[0] SUCCESS[0] FAILED[0] REPEATEDN/AEMEAN/A[0] SUCCESS[0] FAILED[0] REPEATEDN/AGBL(EX-US)N/A[0] SUCCESS[0] FAILED[0] REPEATEDN/AGBLUSN/A[0] SUCCESS[0] FAILED[0] REPEATEDN/ATests ready and approved:approved by: N/ARelease ready and approved:approved by: Krzysztof PrawdzikSTAGE deployment details:STAGE deployment date:Tue Jun 11 11:27:31 UTC 2024Deployment approved:approved by: Krzysztof PrawdzikDeployed by:Krzysztof PrawdzikENV:LinkStatusDetailsAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20NPROD/job/deploy_mdmhub_amer_nprod_amer-stage/383/SUCCESSAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20NPROD/job/deploy_mdmhub_apac_nprod_apac-stage/218/SUCCESSEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20NPROD/job/deploy_mdmhub_emea_nprod_emea-stage/429/SUCCESSGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20NPROD/job/deploy_mdmhub_emea_nprod_gbl-stage/246/SUCCESSGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20NPROD/job/deploy_mdmhub_amer_nprod_gblus-stage/290/SUCCESS STAGE test phase details:Verification date06 Jun 2024 17:05 - 18:00 + 07 Jun 2024 12:15Verification byBachanowicz, Mieczysław 
(Irek) DashboardHintsStatusDetailsMDMHUB / MDMHUB Component errorsIncreased number of alerts → there's certainly something wrong e.g. SUCCESSMDMHUB / MDMHUB KPIsSpikes, especially wide ones, suggest potential issueMDMHUB / MDMHUB Components resourceComponent specific analysis, especially good for the ones updated within latest release (check news fragments)General / Snowflake QC TrendsQuick and easy way to determine if there's something wrong with QC. Any change (lower/higher) → potential issueKubernetes / Vertical Pod Autoscaler (VPA)Change in memory requirement before and after deployment → potential issue Kubernetes / K8s Cluster Usage StatisticsGood for PROD environments since NPROD is too prone to project specific loadsKubernetes / Pod MonitoringComponent specific analysis, especially good for the ones updated within latest release (check news fragments)General / kubernetes-persistent-volumes Storage trend over time General / Alerts Statistics Increase after release → potential issue General / SSL Certificates and Endpoint AvailabilityLower widget, multiple stacked endpoints at the same time for a long periodPROD deployment report:PROD deployment date:Tue Jun 11 12:40:35 UTC 2024Deployment approved:approved by: Krzysztof PrawdzikDeployed by:Krzysztof PrawdzikENV:LinkStatusDetailsAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20PROD/job/deploy_mdmhub_amer_prod_amer-prod/326/SUCCESSAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20PROD/job/deploy_mdmhub_apac_prod_apac-prod/270/SUCCESSEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20PROD/job/deploy_mdmhub_emea_prod_emea-prod/354/SUCCESSGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20PROD/job/deploy_mdmhub_emea_prod_gbl-prod/225/SUCCESSGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20PROD/job/deploy_mdmhub_amer_prod_gblus-prod/277/SUCCESSPROD deploy hypercare 
details:Verification dateusually Deployment_date + 24-48hVerification by e.g. Bachanowicz, Mieczysław (Irek) DashboardHintsStatusDetailsMDMHUB / MDMHUB Component errorsIncreased number of alerts → there's certainly something wrong e.g. SUCCESSMDMHUB / MDMHUB KPIsSpikes, especially wide ones, suggest potential issueMDMHUB / MDMHUB Components resourceComponent specific analysis, especially good for the ones updated within latest release (check news fragments)General / Snowflake QC TrendsQuick and easy way to determine if there's something wrong with QC. Any change (lower/higher) → potential issueKubernetes / Vertical Pod Autoscaler (VPA)Change in memory requirement before and after deployment → potential issue Kubernetes / K8s Cluster Usage StatisticsGood for PROD environments since NPROD is too prone to project specific loadsKubernetes / Pod MonitoringComponent specific analysis, especially good for the ones updated within latest release (check news fragments)General / kubernetes-persistent-volumes Storage trend over time General / Alerts Statistics Increase after release → potential issue General / SSL Certificates and Endpoint AvailabilityLower widget, multiple stacked endpoints at the same time for a long period"
},
{
"title": "4.15.0",
"pageID": "430350581",
"pageLink": "/display/GMDM/4.15.0",
"content": "Release report:Release:4.15.0Release date:Thu Jun 13 15:45:35 UTC 2024STATUSES: SUCCESS / FAILED / REPEATEDReleased by:Krzysztof PrawdzikPlanned GO-LIVE:Thu Jun 20 (in 1 week)StageLinkStatusComments (images 600px)Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/8/SUCCESS CHANGELOG:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/6aab2f8a14ba7406e1e2de60a81a4af2d34d6094#CHANGELOG.mdUnit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/4/testReport/SUCCESSIntegration tests:Execution date: Executed by: Krzysztof PrawdzikAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_amer/485/[84] SUCCESS[0] FAILED[0] REPEATEDAPAC[99] SUCCESS[0] FAILED[1] REPEATEDEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_emea/575/[89] SUCCESS[0] FAILED[0] REPEATEDGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gbl/482/[72] SUCCESS[0] FAILED[0] REPEATEDGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gblus/422/[74] SUCCESS[0] FAILED[0] REPEATEDTests ready and approved:approved by: Krzysztof PrawdzikRelease ready and approved:approved by: Krzysztof PrawdzikSTAGE deployment details:STAGE deployment date:Thu Jun 13 17:46:23 UTC 2024Deployment approved:approved by: Krzysztof PrawdzikDeployed by:Krzysztof PrawdzikENV:LinkStatusDetailsAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20NPROD/job/deploy_mdmhub_amer_nprod_amer-stage/385/SUCCESSDeployment log:4.15.0-amer-stage-deploy.logAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20NPROD/job/deploy_mdmhub_apac_nprod_apac-stage/220/SUCCESSDeployment 
log:4.15.0-apac-stage-deploy.logEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20NPROD/job/deploy_mdmhub_emea_nprod_emea-stage/431/SUCCESSDeployment log:4.15.0-emea-stage-deploy.logGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20NPROD/job/deploy_mdmhub_emea_nprod_gbl-stage/248/SUCCESSDeployment log:4.15.0-gbl-stage-deploy.logGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20NPROD/job/deploy_mdmhub_amer_nprod_gblus-stage/292/SUCCESS Deployment log:4.15.0-gblus-stage-deploy.logSTAGE test phase details:Verification date14 Jun 2024 15:30 - 16:20Verification byBachanowicz, Mieczysław (Irek) DashboardHintsStatusDetailsMDMHUB / MDMHUB Component errorsSUCCESSMDMHUB / MDMHUB KPIsSUCCESSMDMHUB / MDMHUB Components resourceSUCCESSAMER-STAGE - HTTP 401 - known issue with authorization to OneKey (IB)General / Snowflake QC TrendsSUCCESSKubernetes / K8s Cluster Usage StatisticsSUCCESSKubernetes / Pod MonitoringSUCCESSAPAC DEV - Damian's tests + Krzysztof published old version for a moment which behaved strangely on APAC DEV only (selective router)General / kubernetes-persistent-volumes SUCCESSEMEA-STAGEGeneral / Alerts Statistics Why are there duplicates with _ and -?EMEA-NPROD - Marek knows about this? APAC-STAGE - something wrong with monitoring? constant "1" independent of the timeframe?GBLUS-STAGE - Greg is working on it - note from karmaGeneral / SSL Certificates and Endpoint AvailabilityAPAC-NPROD - real issue or monitoring false positives? 
EMEA-NPROD - PROD deployment report:PROD deployment date:Thu Jun 20 11:52:28 UTC 2024Deployment approved:approved by: Krzysztof PrawdzikDeployed by:Krzysztof PrawdzikENV:LinkStatusDetailsAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20PROD/job/deploy_mdmhub_amer_prod_amer-prod/329/SUCCESSAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20PROD/job/deploy_mdmhub_apac_prod_apac-prod/272/SUCCESSEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20PROD/job/deploy_mdmhub_emea_prod_emea-prod/363/SUCCESSGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20PROD/job/deploy_mdmhub_emea_prod_gbl-prod/227/SUCCESSGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20PROD/job/deploy_mdmhub_amer_prod_gblus-prod/279/SUCCESSPROD deploy hypercare details:Verification date21 Jun 2024 15:45 + review 24 Jun 2024 11:00 (Bachanowicz, Mieczysław (Irek)) Verification byBachanowicz, Mieczysław (Irek) + Prawdzik, Krzysztof DashboardStatusDetailsMDMHUB / MDMHUB Component errorsDCR OneKey change was deployed without extensive testing on NPROD. Verified with Paweł - no major risks to leave it unattended for the weekend.GBLUS-PROD - mdm-manager, peak processingAPAC-PROD - onekey. 24 Jun 2024 : Did not happen since then. APAC-PROD mdm-managerAPAC-PROD DCR2 ServiceEMEA-PROD map-channel, strange errors GBL-PROD pforcerx channel - MR-9012 - 24 Jun 2024 Did not happen since then.  GBL-PROD - Created MR-9011 MDMHUB / MDMHUB KPIs  APAC-PROD to Greg → IB: This is a recurring thing. Happens every week.   EMEA-PROD → IB: This is a recurring thing. Happens every week.   GBL-PROD → IB: This is a recurring thing. Happens every week. 
MDMHUB / MDMHUB Components resourceGeneral / Snowflake QC TrendsKubernetes / K8s Cluster Usage StatisticsKubernetes / Pod MonitoringEMEA-PROD - known issue during deploymentGeneral / kubernetes-persistent-volumes General / Alerts Statistics AMER-PROD zookeeper reelectionGBLUS-PROD high processing, corresponds with manager issueEMEA-PROD deployment issueGeneral / SSL Certificates and Endpoint Availability"
},
{
"title": "4.16.0",
"pageID": "438895667",
"pageLink": "/display/GMDM/4.16.0",
"content": "Release report:Release:4.16.0Release date:Mon Jun 24 15:13:56 UTC 2024STATUSES: SUCCESS / FAILED / REPEATEDReleased by:Krzysztof PrawdzikPlanned GO-LIVE:Thu Jun 27 (in 3 days)StageLinkStatusComments (images 600px)Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/9/SUCCESS CHANGELOG:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/0789f75320df48915b3eaa82d1669bfe2fdc0668#CHANGELOG.mdUnit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/9/testReport/SUCCESSIntegration tests:Execution date: Tue Jun 25 17:00:03 UTC 2024Executed by: Krzysztof PrawdzikAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_amer/493/[85] SUCCESS[0] FAILED[0] REPEATEDAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_apac/429/[102] SUCCESS[0] FAILED[0] REPEATEDEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_emea/585/[89] SUCCESS[0] FAILED[1] REPEATEDGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gbl/489/[73] SUCCESS[0] FAILED[0] REPEATEDGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gblus/429/[75] SUCCESS[0] FAILED[0] REPEATEDTests ready and approved:approved by: Krzysztof PrawdzikRelease ready and approved:approved by: Krzysztof PrawdzikSTAGE deployment details:STAGE deployment date:Mon Jun 24 21:05:13 UTC 2024Deployment approved:approved by: Krzysztof PrawdzikDeployed by:Krzysztof PrawdzikENV:LinkStatusDetailsAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20NPROD/job/deploy_mdmhub_amer_nprod_amer-stage/386/SUCCESSDeployment 
log:4.16.0-amer-stage-deploy.logAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20NPROD/job/deploy_mdmhub_apac_nprod_apac-stage/222/SUCCESSDeployment log:4.16.0-apac-stage-deploy.logEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20NPROD/job/deploy_mdmhub_emea_nprod_emea-stage/434/SUCCESSDeployment log:4.16.0-emea-stage-deploy.logGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20NPROD/job/deploy_mdmhub_emea_nprod_gbl-stage/249/SUCCESSDeployment log:4.16.0-gbl-stage-deploy.logGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20NPROD/job/deploy_mdmhub_amer_nprod_gblus-stage/293/SUCCESS Deployment log:4.16.0-gblus-stage-deploy.logSTAGE test phase details:Verification date27 Jun 2024  10:45 - 11:45Verification byBachanowicz, Mieczysław (Irek)  + Prawdzik, Krzysztof DashboardStatusDetailsMDMHUB / MDMHUB Component errors  AMER-STAGE - small issues with COMPANYGlobalCustomerID (COMPANY Customer Id: 02-100373164 does not exist in Reltio or is deactivated)  APAC-STAGE - AWS issueAWS does not show any problems with their S3 services   EMEA-STAGE, manager  GBL-STAGE, manager MDMHUB / MDMHUB KPIs  EMEA-STAGE MDMHUB / MDMHUB Components resourceGeneral / Snowflake QC TrendsKubernetes / K8s Cluster Usage StatisticsKubernetes / Pod Monitoring  APAC-STAGE - Mon morning - HCONames memory reload, config update by KarolGeneral / kubernetes-persistent-volumes General / Alerts Statistics    APAC-STAGE - Friday, 17:00, a lot of strange errors, correlates with AWS issue   AMER-STAGE + APAC-STAGE + GBLUS-STAGE - Grzesiek - Tue/Wed - Snowflake on the STAGE environments?General / SSL Certificates and Endpoint AvailabilityAPAC-NPRODPROD deployment report:PROD deployment date:Thu Jun 27 09:43:12 UTC 2024Deployment approved:approved by: Krzysztof PrawdzikDeployed by:Krzysztof 
PrawdzikENV:LinkStatusDetailsAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20PROD/job/deploy_mdmhub_amer_prod_amer-prod/331/SUCCESSDeployment log:4.16.0-amer-prod-deploy.logAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20PROD/job/deploy_mdmhub_apac_prod_apac-prod/274/SUCCESSDeployment log:4.16.0-apac-prod-deploy.logEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20PROD/job/deploy_mdmhub_emea_prod_emea-prod/365/SUCCESSDeployment log:4.16.0-emea-prod-deploy.logGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20PROD/job/deploy_mdmhub_emea_prod_gbl-prod/228/SUCCESSDeployment log:4.16.0-gbl-prod-deploy.logGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20PROD/job/deploy_mdmhub_amer_prod_gblus-prod/281/SUCCESSDeployment log:4.16.0-gblus-prod-deploy.logPROD deploy hypercare details:Verification date28 Jun 2024 16-17:00Verification byBachanowicz, Mieczysław (Irek) , Prawdzik, Krzysztof  + feat. Chojnowski, Maciej , Szymczyk, Damian DashboardStatusDetailsMDMHUB / MDMHUB Component errors   AMER-PROD - mdm service 2 + OneKey - 2 examples of failed lookup codes transformation  Issue found for two DCR requests. Failed to send req to OK:  IB> Paweł - create ticket3bd7e9217a004b37a2c0cbd7afabda1f4d9e09c06b89494c950a759889cf12d0low priority issue - better handling of lookups - this comes up often on various environments (APAC-PROD) "Create dcr exception"crash in the OneKey endpointlog1.txt AMER-PROD - clean NPE → create ticket to clean up such "errors"  GBLUS-PROD - single error, however huge   EMEA-PROD, map-channel, no trace, Kubernetes restarted the component.   
EMEA-PROD, minor issue, for further investigation (Krzysiek) - low prio    GBLUS-PROD, known issue - MR-9011 MDMHUB / MDMHUB KPIs    Publishing latency ~1 year - known issue, ticket to create (IB)GBL-PRODMDMHUB / MDMHUB Components resource AMER-PROD, map channel, high CPU usage, to verify on MondaysGeneral / Snowflake QC Trends Kubernetes / K8s Cluster Usage Statistics Kubernetes / Pod Monitoring General / kubernetes-persistent-volumes  General / Alerts Statistics   GBL-PROD - confirm with Damian that's not an issueGeneral / SSL Certificates and Endpoint Availability   US-PROD IB > Ticket to create to check env selectors for us-prod"
},
{
"title": "4.17.0",
"pageID": "438899752",
"pageLink": "/display/GMDM/4.17.0",
"content": "Release report:Release:4.17.0Release date:Fri Jun 28 15:13:34 UTC 2024STATUSES: SUCCESS / FAILED / REPEATEDReleased by:Krzysztof PrawdzikPlanned GO-LIVE:Thu Jul 4 (in 3 days)StageLinkStatusComments (images 600px)Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/10/SUCCESS CHANGELOG:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/14f625d0b5d47629245ed7fd0d0112e7ad5675e8#CHANGELOG.mdUnit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/10/testReport/SUCCESSIntegration tests:Execution date: Executed by: Krzysztof PrawdzikAMER[85] SUCCESS[0] FAILED[0] REPEATEDAPAC[102] SUCCESS[0] FAILED[0] REPEATEDEMEA[89] SUCCESS[0] FAILED[1] REPEATEDGBL(EX-US)[73] SUCCESS[0] FAILED[0] REPEATEDGBLUS[75] SUCCESS[0] FAILED[0] REPEATEDTests ready and approved:approved by: Krzysztof PrawdzikRelease ready and approved:approved by: Krzysztof PrawdzikSTAGE deployment details:STAGE deployment date:Deployment approved:approved by: Krzysztof PrawdzikDeployed by:Krzysztof PrawdzikENV:LinkStatusDetailsAMERSUCCESSAPACSUCCESSEMEASUCCESSGBL(EX-US)SUCCESSGBLUSSUCCESS STAGE test phase details:Verification dateVerification byDashboardStatusDetailsMDMHUB / MDMHUB Component errorsMDMHUB / MDMHUB KPIsMDMHUB / MDMHUB Components resourceGeneral / Snowflake QC TrendsKubernetes / K8s Cluster Usage StatisticsKubernetes / Pod MonitoringGeneral / kubernetes-persistent-volumes General / Alerts Statistics General / SSL Certificates and Endpoint AvailabilityPROD deployment report:PROD deployment date:Deployment approved:approved by: Krzysztof PrawdzikDeployed by:Krzysztof PrawdzikENV:LinkStatusDetailsAMERSUCCESSAPACSUCCESSEMEASUCCESSGBL(EX-US)SUCCESSGBLUSSUCCESSPROD deploy hypercare details:Verification dateVerification byDashboardStatusDetailsMDMHUB / MDMHUB Component errorsMDMHUB / MDMHUB KPIsMDMHUB / MDMHUB Components resourceGeneral / 
Snowflake QC TrendsKubernetes / K8s Cluster Usage StatisticsKubernetes / Pod MonitoringGeneral / kubernetes-persistent-volumes General / Alerts Statistics General / SSL Certificates and Endpoint Availability"
},
{
"title": "4.16.1",
"pageID": "438900696",
"pageLink": "/display/GMDM/4.16.1",
"content": "Release report:Release:4.16.1Release date:Tue Jul 02 10:02:19 UTC 2024STATUSES: SUCCESS / FAILED / REPEATEDReleased by:Krzysztof PrawdzikPlanned GO-LIVE:Tue Jul 02 (same day)StageLinkStatusComments (images 600px)Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/master/108/SUCCESS CHANGELOG:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/60a14c07d0421cb25ee9d1e29aa376705d20686dUnit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/master/108/testReport/SUCCESSIntegration tests:Execution date: N/AExecuted by: N/AAMERN/A[0] SUCCESS[0] FAILED[0] REPEATEDN/AAPACN/A[0] SUCCESS[0] FAILED[0] REPEATEDN/AEMEAN/A[0] SUCCESS[0] FAILED[0] REPEATEDN/AGBL(EX-US)N/A[0] SUCCESS[0] FAILED[0] REPEATEDN/AGBLUSN/A[0] SUCCESS[0] FAILED[0] REPEATEDN/ATests ready and approved:approved by: Krzysztof PrawdzikRelease ready and approved:approved by: Krzysztof PrawdzikSTAGE deployment details:STAGE deployment date:Deployment approved:approved by: Krzysztof PrawdzikDeployed by:Krzysztof PrawdzikENV:LinkStatusDetailsAMERSUCCESSAPACSUCCESSEMEASUCCESSGBL(EX-US)SUCCESSGBLUSSUCCESS STAGE test phase details:Verification date02 Jul 2024 13.00 - 14.00Verification byPrawdzik, Krzysztof DashboardStatusDetailsMDMHUB / MDMHUB Component errors MDMHUB / MDMHUB KPIs MDMHUB / MDMHUB Components resource General / Snowflake QC Trends Kubernetes / K8s Cluster Usage Statistics Kubernetes / Pod Monitoring General / kubernetes-persistent-volumes  General / Alerts Statistics  General / SSL Certificates and Endpoint Availability PROD deployment report:PROD deployment date:Deployment approved:approved by: Krzysztof PrawdzikDeployed by:Krzysztof 
PrawdzikENV:LinkStatusDetailsAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20PROD/job/deploy_mdmhub_amer_prod_amer-prod/332/SUCCESSAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20PROD/job/deploy_mdmhub_apac_prod_apac-prod/275/SUCCESSEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20PROD/job/deploy_mdmhub_emea_prod_emea-prod/369/SUCCESSGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20PROD/job/deploy_mdmhub_emea_prod_gbl-prod/230/SUCCESSGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20PROD/job/deploy_mdmhub_amer_prod_gblus-prod/282/SUCCESSPROD deploy hypercare details:Verification dateVerification byDashboardStatusDetailsMDMHUB / MDMHUB Component errorsMDMHUB / MDMHUB KPIsMDMHUB / MDMHUB Components resourceGeneral / Snowflake QC TrendsKubernetes / K8s Cluster Usage StatisticsKubernetes / Pod MonitoringGeneral / kubernetes-persistent-volumes General / Alerts Statistics General / SSL Certificates and Endpoint Availability"
},
{
"title": "4.18.0",
"pageID": "438900984",
"pageLink": "/display/GMDM/4.18.0",
"content": "Release report:Release:4.18.0Release date:Tue Jul 02 14:57:49 UTC 2024STATUSES: SUCCESS / FAILED / REPEATEDReleased by:Krzysztof PrawdzikPlanned GO-LIVE:Thu Jul 04 (in 2 days)StageLinkStatusComments (images 600px)Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/11/SUCCESS CHANGELOG:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/14f625d0b5d47629245ed7fd0d0112e7ad5675e8#CHANGELOG.mdhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/60a14c07d0421cb25ee9d1e29aa376705d20686dhttp://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/f90e4505509822513ae8c27a48a776e3acd67c8eUnit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/11/testReport/SUCCESSIntegration tests:Execution date: Tue Jul 02 15:59:32 UTC 2024Executed by: Krzysztof PrawdzikAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_amer/499/[85] SUCCESS[0] FAILED[0] REPEATEDAPAC[94] SUCCESS[1] FAILED[7] REPEATEDone of the China tests failed due to timeout:RCA: Action timeout after 360000 milliseconds.Failed to receive message on endpoint: 'apac-dev-out-full-hcp-merge-cn'Repeated tests:several tests failed due to a recent change of DCR tracking statuses on APAC DEV on the Reltio side. Repeated from local PC (with updated values) one more time by Krzysztof Prawdzik. Tests were repeated manually and passed with success. A fix for these tests is being preparedEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_emea/591/[89] SUCCESS[0] FAILED[1] REPEATEDGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gbl/495/[73] SUCCESS[0] FAILED[0] REPEATEDGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gblus/435/[75] 
SUCCESS[0] FAILED[0] REPEATEDTests ready and approved:approved by: Krzysztof PrawdzikRelease ready and approved:approved by: Krzysztof PrawdzikSTAGE deployment details:STAGE deployment date:Tue Jul 02 15:34:46 UTC 2024Deployment approved:approved by: Krzysztof PrawdzikDeployed by:Krzysztof PrawdzikENV:LinkStatusDetailsAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20NPROD/job/deploy_mdmhub_amer_nprod_amer-stage/389/SUCCESSDeployment log:4.18.0-amer-stage-deploy.logAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20NPROD/job/deploy_mdmhub_apac_nprod_apac-stage/224/SUCCESSDeployment log:4.18.0-apac-stage-deploy.logEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20NPROD/job/deploy_mdmhub_emea_nprod_emea-stage/436/SUCCESSDeployment log:4.18.0-emea-stage-deploy.logGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20NPROD/job/deploy_mdmhub_emea_nprod_gbl-stage/252/SUCCESSDeployment log:4.18.0-gbl-stage-deploy.logGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20NPROD/job/deploy_mdmhub_amer_nprod_gblus-stage/295/SUCCESS Deployment log:4.18.0-gblus-stage-deploy.logSTAGE test phase details:Verification dateVerification byDashboardStatusDetailsMDMHUB / MDMHUB Component errorsMDMHUB / MDMHUB KPIsMDMHUB / MDMHUB Components resourceGeneral / Snowflake QC TrendsKubernetes / K8s Cluster Usage StatisticsKubernetes / Pod MonitoringGeneral / kubernetes-persistent-volumes General / Alerts Statistics General / SSL Certificates and Endpoint AvailabilityPROD deployment report:PROD deployment date:Thu Jul 04 08:28:26 UTC 2024Deployment approved:approved by: Krzysztof PrawdzikDeployed by:Krzysztof PrawdzikENV:LinkStatusDetailsAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20PROD/job/deploy_mdmhub_amer_prod_amer-prod/333/SUCCESSDeployment 
log:4.18.0-amer-prod-deploy.logAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20PROD/job/deploy_mdmhub_apac_prod_apac-prod/276/SUCCESSDeployment log:4.18.0-apac-prod-deploy.logEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20PROD/job/deploy_mdmhub_emea_prod_emea-prod/371/SUCCESSDeployment log:4.18.0-emea-prod-deploy.logGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20PROD/job/deploy_mdmhub_emea_prod_gbl-prod/231/SUCCESSDeployment log:4.18.0-gbl-prod-deploy.logGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20PROD/job/deploy_mdmhub_amer_prod_gblus-prod/283/SUCCESSDeployment log:4.18.0-gblus-prod-deploy.logPROD deploy hypercare details:Verification date05 Jul 2024 15:30 - 17:00Verification byBachanowicz, Mieczysław (Irek), Prawdzik, Krzysztof feat Anuskiewicz, Piotr DashboardStatusDetailsMDMHUB / MDMHUB Component errors   AMER-PROD - batch-service: data input issue, OneMed job - incorrect data ← Piotr   AMER-PROD, mdm-dcr2-service: known issue: "Can't convert data to Json string"AMER-PROD, manager: Error processing request  AMER-PROD, onekey-dcr: known issue  APAC-PROD, mdm-manager  EMEA-PROD, MAPP channel, non-critical - needs to be verified "later"  EMEA-PROD, manager, minor, to verify cause: "javax.ws.rs.ClientErrorException: HTTP 429 Too Many Requests at"  GBL-PROD, manager - known issueMDMHUB / MDMHUB KPIs   GBLUS-PROD - why wasn't it smoothly processed? 
GBL-PRODMDMHUB / MDMHUB Components resource General / Snowflake QC Trends Kubernetes / K8s Cluster Usage Statistics Kubernetes / Pod Monitoring   GBLUS-PROD GBL-PROD, publisher, manager high usage   EMEA-PROD, 7d   EMEA-PRODGeneral / kubernetes-persistent-volumes  General / Alerts Statistics    AMER-PROD, empty COMPANYGlobalCustomerId. Ticket raised by COMPANY to Reltio team - HSM-708 + support.reltio.com/hc/requests/105633GBL-PROD, not an issueGBLUS-PROD, probably COMPANY manual merge/unmergeGeneral / SSL Certificates and Endpoint Availability  Schedule meeting with Marek on how to deep-dive to diagnose MR-9088 MR-9089 Kibana "Kube-events" index contains logs from Kubernetes   EMEA-PROD - DCR, requires further verification with Marek/Damian. "
},
{
"title": "4.18.1",
"pageID": "438317171",
"pageLink": "/display/GMDM/4.18.1",
"content": "Release report:Release:4.18.1Release date:Mon Jul 08 15:01:32 UTC 2024STATUSES: SUCCESS / FAILED / REPEATEDReleased by:Krzysztof PrawdzikPlanned GO-LIVE:Tue Jul 09 (in 1 day)StageLinkStatusComments (images 600px)Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/master/109/SUCCESS CHANGELOG:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/446610ec20f2837570cb75c518ff0dc03bd7528f#CHANGELOG.mdUnit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/master/109/testReport/SUCCESSIntegration tests:Execution date: N/AExecuted by: N/AAMERN/A[0] SUCCESS[0] FAILED[0] REPEATEDN/AAPACN/A[0] SUCCESS[0] FAILED[0] REPEATEDN/AEMEAN/A[0] SUCCESS[0] FAILED[0] REPEATEDN/AGBL(EX-US)N/A[0] SUCCESS[0] FAILED[0] REPEATEDN/AGBLUSN/A[0] SUCCESS[0] FAILED[0] REPEATEDN/ATests ready and approved:approved by: Release ready and approved:approved by: Krzysztof PrawdzikSTAGE deployment details:STAGE deployment date:Tue Jul 09 07:07:46 UTC 2024Deployment approved:approved by: Krzysztof PrawdzikDeployed by:Krzysztof PrawdzikENV:LinkStatusDetailsAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20NPROD/job/deploy_mdmhub_amer_nprod_amer-stage/390/SUCCESSDeployment log:4.18.1-amer-stage-deploy.logAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20NPROD/job/deploy_mdmhub_apac_nprod_apac-stage/225/SUCCESSDeployment log:4.18.1-apac-stage-deploy.logEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20NPROD/job/deploy_mdmhub_emea_nprod_emea-stage/437/SUCCESSDeployment log:4.18.1-emea-stage-deploy.logGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20NPROD/job/deploy_mdmhub_emea_nprod_gbl-stage/254/SUCCESSDeployment 
log:4.18.1-gbl-stage-deploy.logGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20NPROD/job/deploy_mdmhub_amer_nprod_gblus-stage/296/SUCCESS Deployment log:4.18.1-gblus-stage-deploy.logSTAGE test phase details:Verification date09 Jul 2024 12:00Verification byPrawdzik, Krzysztof DashboardStatusDetailsMDMHUB / MDMHUB Component errors MDMHUB / MDMHUB KPIs MDMHUB / MDMHUB Components resource General / Snowflake QC Trends Kubernetes / K8s Cluster Usage Statistics Kubernetes / Pod Monitoring General / kubernetes-persistent-volumes  General / Alerts Statistics  General / SSL Certificates and Endpoint Availability PROD deployment report:PROD deployment date:Thu Jul 04 08:28:26 UTC 2024Deployment approved:approved by: Krzysztof PrawdzikDeployed by:Krzysztof PrawdzikENV:LinkStatusDetailsAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20PROD/job/deploy_mdmhub_amer_prod_amer-prod/335/SUCCESSDeployment log:4.18.1-amer-prod-deploy.logAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20PROD/job/deploy_mdmhub_apac_prod_apac-prod/278/SUCCESSDeployment log:4.18.1-apac-prod-deploy.logEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20PROD/job/deploy_mdmhub_emea_prod_emea-prod/374/SUCCESSDeployment log:4.18.1-emea-prod-deploy.logGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20PROD/job/deploy_mdmhub_emea_prod_gbl-prod/235/SUCCESSDeployment log:4.18.1-gbl-prod-deploy.logGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20PROD/job/deploy_mdmhub_amer_prod_gblus-prod/285/SUCCESSDeployment log:4.18.1-gblus-prod-deploy.logPROD deploy hypercare details:Verification dateVerification byDashboardStatusDetailsMDMHUB / MDMHUB Component errorsMDMHUB / MDMHUB KPIsMDMHUB / MDMHUB Components resourceGeneral / Snowflake QC TrendsKubernetes / K8s Cluster Usage StatisticsKubernetes / Pod MonitoringGeneral / 
kubernetes-persistent-volumes General / Alerts Statistics General / SSL Certificates and Endpoint Availability"
},
{
"title": "4.19.0",
"pageID": "438317571",
"pageLink": "/display/GMDM/4.19.0",
"content": "Release report:Release:4.19.0Release date:Tue Jul 09 14:29:10 UTC 2024STATUSES: SUCCESS / FAILED / REPEATEDReleased by:Krzysztof PrawdzikPlanned GO-LIVE:Thu Jul 11 (in 2 days)StageLinkStatusComments (images 600px)Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/12/SUCCESS CHANGELOG:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/106376c5e3a96725ae10c4eff57dc19157549d1c#CHANGELOG.mdUnit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/12/testReport/SUCCESSIntegration tests:Execution date: Tue Jul 09 17:00:03 UTC 2024Executed by: Krzysztof PrawdzikAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_amer/504/[85] SUCCESS[0] FAILED[0] REPEATEDAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_apac/444/[98] SUCCESS[0] FAILED[4] REPEATEDEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_emea/597/[90] SUCCESS[0] FAILED[0] REPEATEDGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gbl/500/[72] SUCCESS[1] FAILED[0] REPEATEDGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gblus/440/[75] SUCCESS[0] FAILED[0] REPEATEDTests ready and approved:approved by: Krzysztof PrawdzikRelease ready and approved:approved by: Krzysztof PrawdzikSTAGE deployment details:STAGE deployment date:Tue Jul 09 15:15:26 UTC 2024Deployment approved:approved by: Krzysztof PrawdzikDeployed by:Krzysztof PrawdzikENV:LinkStatusDetailsAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20NPROD/job/deploy_mdmhub_amer_nprod_amer-stage/391/SUCCESSDeployment 
log:APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20NPROD/job/deploy_mdmhub_apac_nprod_apac-stage/226/SUCCESSDeployment log:EMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20NPROD/job/deploy_mdmhub_emea_nprod_emea-stage/438/SUCCESSDeployment log:GBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20NPROD/job/deploy_mdmhub_emea_nprod_gbl-stage/255/SUCCESSDeployment log:GBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20NPROD/job/deploy_mdmhub_amer_nprod_gblus-stage/297/SUCCESS Deployment log:STAGE test phase details:Verification date11 Jul 2024 11:00 - 12:00Verification byPrawdzik, Krzysztof , Bachanowicz, Mieczysław (Irek)  feat Szymanska, KlaudiaDashboardStatusDetailsMDMHUB / MDMHUB Component errors MDMHUB / MDMHUB KPIs   GBL-STAGEMDMHUB / MDMHUB Components resourceGeneral / Snowflake QC TrendsKubernetes / K8s Cluster Usage StatisticsKubernetes / Pod MonitoringGeneral / kubernetes-persistent-volumes General / Alerts Statistics   APAC-STAGE - known issue?  APAC-STAGE, kong 503, kube job completion? pod crash looping pdk?General / SSL Certificates and Endpoint Availability  Need to monitor production deployment for these irregularitiesAMER-NPROD    APAC-DEV, dcr, Klaudia: bean issue, strange, nothing correlated to recent changes in code. Error: "requestScopedExchange"   EMEA-QA,  dcr, Klaudia checked logs, nothing unusual. 
Need to increase logs in blackbox exporterPROD deployment report:PROD deployment date:Thu Jul 11 10:17:20 UTC 2024Deployment approved:approved by: Krzysztof PrawdzikDeployed by:Krzysztof PrawdzikENV:LinkStatusDetailsAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20PROD/job/deploy_mdmhub_amer_prod_amer-prod/338/SUCCESSDeployment log:APAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20PROD/job/deploy_mdmhub_apac_prod_apac-prod/279/SUCCESSDeployment log:EMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20PROD/job/deploy_mdmhub_emea_prod_emea-prod/376/SUCCESSDeployment log:GBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20PROD/job/deploy_mdmhub_emea_prod_gbl-prod/236/SUCCESSDeployment log:GBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20PROD/job/deploy_mdmhub_amer_prod_gblus-prod/289/SUCCESSDeployment log:PROD deploy hypercare details:Verification date12 Jul 2024 13:30 - 14:30 + warning revalidation on 15 Jul 2024 10:00Verification byBachanowicz, Mieczysław (Irek) , Prawdzik, Krzysztof DashboardStatusDetailsMDMHUB / MDMHUB Component errors   APAC-PROD, manager MR-9097 MR-9098   GBL-PRODWe need to meet up with Grzesiek and verify these issues MDMHUB / MDMHUB KPIs  EMEA-PROD GBL-PRODMDMHUB / MDMHUB Components resourceGeneral / Snowflake QC TrendsKubernetes / K8s Cluster Usage StatisticsEMEA-PRODKubernetes / Pod Monitoring GBL-PRODVerification on Monday - high memory usageGeneral / kubernetes-persistent-volumes General / Alerts Statistics   AMER-PROD disk space AMER-PRODPublisher broken eventsZookeeper - info from Marek in Karma that it's nothing to be afraid ofQuality gateway - confirmed with Piotr GBLUS-PRODPublisher broken eventsSnowflake EMEA-PRODHigh load - confirmed with Marek and Piotr GBL-PRODHigh ETA - China reload (info in Karma) 
GBLUS-PRODQuality gateway - Dominiq addressed it to Deloitte (info from Piotr)Confirmed with PiotrGeneral / SSL Certificates and Endpoint Availability PROD "
},
{
"title": "4.21.0",
"pageID": "438910809",
"pageLink": "/display/GMDM/4.21.0",
"content": "Release report:Release:4.21.0Release date:Tue Jul 09 14:29:10 UTC 2024STATUSES: SUCCESS / FAILED / REPEATEDReleased by:Krzysztof PrawdzikPlanned GO-LIVE:Thu Jul 18 (in 2 days)StageLinkStatusComments (images 600px)Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/18/SUCCESS CHANGELOG:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/ef6b59b63a3800a08e98c2e36e2853d45ed97395#CHANGELOG.mdUnit tests:SUCCESSIntegration tests:Execution date: Sun Jul 14 17:00:05 UTC 2024Executed by: Krzysztof PrawdzikAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_amer/510/[85] SUCCESS[0] FAILED[0] REPEATEDAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_apac/450/[102] SUCCESS[0] FAILED[0] REPEATEDEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_emea/600/[90] SUCCESS[0] FAILED[0] REPEATEDGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gbl/505/[72] SUCCESS[1] FAILED[0] REPEATEDGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gblus/443/[75] SUCCESS[0] FAILED[0] REPEATEDTests ready and approved:approved by: Krzysztof PrawdzikRelease ready and approved:approved by: Krzysztof PrawdzikSTAGE deployment details:STAGE deployment date:Tue Jul 16 22:15:07 UTC 2024Deployment approved:approved by: Krzysztof PrawdzikDeployed by:Krzysztof PrawdzikENV:LinkStatusDetailsAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20NPROD/job/deploy_mdmhub_amer_nprod_amer-stage/400/SUCCESSDeployment log:4.21.0-amer-stage-deploy.logAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20NPROD/job/deploy_mdmhub_apac_nprod_apac-stage/238/SUCCESSDeployment 
log:4.21.0-apac-stage-deploy.logEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20NPROD/job/deploy_mdmhub_emea_nprod_emea-stage/446/SUCCESSDeployment log:4.21.0-emea-stage-deploy.logGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20NPROD/job/deploy_mdmhub_emea_nprod_gbl-stage/260/SUCCESSDeployment log:4.21.0-gbl-stage-deploy.logGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20NPROD/job/deploy_mdmhub_amer_nprod_gblus-stage/305/SUCCESS Deployment log:4.21.0-gblus-stage-deploy.logSTAGE test phase details:Verification date18 Jul 2024 13:00Verification byPrawdzik, Krzysztof + Bachanowicz, Mieczysław (Irek)  feat. Grygorczuk, Marek DashboardStatusDetailsMDMHUB / MDMHUB Component errors AMER-NPROD - known issue during deployment APAC-STAGE - dcr service - create ticket to change error 400 to warning, to verify if these publishing errors may cause some synchronization issues in SF Callback - Java Heap Space? Memory issue. 
Caused by APAC-PROD to APAC-STAGE cloningMDMHUB / MDMHUB KPIs  APAC-STAGE - env cloning  EMEA-STAGE, 1h+ long publishing timesMDMHUB / MDMHUB Components resourceGeneral / Snowflake QC TrendsKubernetes / K8s Cluster Usage StatisticsKubernetes / Pod MonitoringGeneral / kubernetes-persistent-volumes General / Alerts Statistics   EMEA-STAGE - high ETA - this graph does not reflect thisGeneral / SSL Certificates and Endpoint Availability  APAC-STAGE, cloning related EMEA/GBL - a lot of strange endpoint failures - Marek/Damian to verifyPROD deployment report:PROD deployment date:Thu Jul 18 12:57:55 UTC 2024Deployment approved:approved by: Krzysztof PrawdzikDeployed by:Krzysztof PrawdzikENV:LinkStatusDetailsAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20PROD/job/deploy_backend_amer_prod/226/SUCCESSDeployment log:4.21.0-amer-prod-deploy.logAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20PROD/job/deploy_mdmhub_apac_prod_apac-prod/282/SUCCESSDeployment log:4.21.0-apac-prod-deploy.logEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20PROD/job/deploy_mdmhub_emea_prod_emea-prod/380/SUCCESSDeployment log:4.21.0-emea-prod-deploy.logGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20PROD/job/deploy_mdmhub_emea_prod_gbl-prod/239/SUCCESSDeployment log:4.21.0-gbl-prod-deploy.logGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20PROD/job/deploy_mdmhub_amer_prod_gblus-prod/292/SUCCESSDeployment log:4.21.0-gblus-prod-deploy.logPROD deploy hypercare details:Verification dateVerification byRelease on prod wasn't verified due to the CrowdStrike outage. 
DashboardStatusDetailsMDMHUB / MDMHUB Component errorsMDMHUB / MDMHUB KPIsMDMHUB / MDMHUB Components resourceGeneral / Snowflake QC TrendsKubernetes / K8s Cluster Usage StatisticsKubernetes / Pod MonitoringGeneral / kubernetes-persistent-volumes General / Alerts Statistics General / SSL Certificates and Endpoint Availability"
},
{
"title": "4.22.0",
"pageID": "438327818",
"pageLink": "/display/GMDM/4.22.0",
"content": "Release report:Release:4.22.0Release date:Tue Jul 23 16:32:08 UTC 2024STATUSES: SUCCESS / FAILED / REPEATEDReleased by:Krzysztof PrawdzikPlanned GO-LIVE:Thu Jul 25 (in 2 days)StageLinkStatusComments (images 600px)Build:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/19/SUCCESS CHANGELOG:http://bitbucket-insightsnow.COMPANY.com/projects/GMDM/repos/mdm-hub-inbound-services/commits/e366164c1adff5b1ccfd79dea28f068bc34a0ee2#CHANGELOG.mdUnit tests:https://jenkins-gbl-mdm-hub.COMPANY.com/job/bitbucket-mdm-hub/job/mdm-hub-inbound-services/job/develop/19/testReport/SUCCESSIntegration tests:Execution date: Tue Jul 23 17:24:15 UTC 2024Executed by: Krzysztof PrawdzikAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_amer/517/[85] SUCCESS[0] FAILED[0] REPEATEDAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_apac/457/[94] SUCCESS[8] FAILED[0] REPEATEDEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_emea/608/[90] SUCCESS[0] FAILED[0] REPEATEDGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gbl/510/[72] SUCCESS[1] FAILED[0] REPEATEDGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/MDM_integration_tests/job/int__test__kube_dev_gblus/449/[75] SUCCESS[0] FAILED[0] REPEATEDTests ready and approved:approved by: Krzysztof PrawdzikRelease ready and approved:approved by: Krzysztof PrawdzikSTAGE deployment details:STAGE deployment date:Tue Jul 23 17:23:40 UTC 2024Deployment approved:approved by: Krzysztof PrawdzikDeployed by:Krzysztof PrawdzikENV:LinkStatusDetailsAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20NPROD/job/deploy_mdmhub_amer_nprod_amer-stage/404/SUCCESSDeployment 
log:4.22.0-amer-stage-deploy.logAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20NPROD/job/deploy_mdmhub_apac_nprod_apac-stage/243/SUCCESSDeployment log:4.22.0-apac-stage-deploy.logEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20NPROD/job/deploy_mdmhub_emea_nprod_emea-stage/450/SUCCESSDeployment log:4.22.0-emea-stage-deploy.logGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20NPROD/job/deploy_mdmhub_emea_nprod_gbl-stage/264/SUCCESSDeployment log:4.22.0-gbl-prod-deploy.logGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20NPROD/job/deploy_mdmhub_amer_nprod_gblus-stage/309/SUCCESS Deployment log:4.22.0-gblus-prod-deploy.logSTAGE test phase details:Verification date25 Jul 2024 11:15 - 12:30Verification byBachanowicz, Mieczysław (Irek)  + Prawdzik, Krzysztof feat. Grygorczuk, Marek DashboardStatusDetailsMDMHUB / MDMHUB Component errors AMER-STAGE, EMEA-STAGE, known errors for OneKey DCRAPAC-STAGE, mdmhub-rawdata-exporter - too big request (6.6GB) - Elastic blocks? Who's doing that  stacktrace: StreamsException.json to Rafał EMEA-STAGE, mdmhub-mdm-manager, issues already reported earlier  GBL-STAGE, something with batches (UpdateHCPBatchRestRoute) - probably wrong JSON - ticket to make it more pleasant MDMHUB / MDMHUB KPIsIrek > ask Rafał - what does "Publishing latency" mean - total delay of our processing stack?MDMHUB / MDMHUB Components resource EMEA-STAGE, Batch service, more memory usage? → nothing to worry about GBLUS, api-router, more memory? 
→ nothing to worry aboutGeneral / Snowflake QC Trends Kubernetes / K8s Cluster Usage StatisticsEMEA-NPROD, higher CPU usage, storage usage increaseKubernetes / Pod MonitoringAMER-NPROD, something is happening → batch processing, Reltio caps events to be processed which we complGeneral / kubernetes-persistent-volumes EMEA-NPROD, increasing storage usage → entity enricher working (15M events being processed)need to be verified with MarekGeneral / Alerts Statistics  APAC-NPROD,  Target down, what does it mean? We don't have such alerts → glitch in the matrixPublisher broken events - addressed in Karma by Will  reconciliation_events_threshold_exceeded?  customresource_status_condition → Related to Kafka migrationKubeJobFailedpod_crashlooping_pdks - more than usualzookeeper_fsync_time_too_long - waiting for more dataAMER-NPRODdag_failed_nprodpod_crashlooping_hub_nprodpod_crashlooping_pdksEMEA-NPRODdag_failed_nprod customresource_status_condition -  Piotr DCR testing API - kong3_http_503_status_nprodGeneral / SSL Certificates and Endpoint AvailabilityEMEA-DEV, dcr - Piotr testingPROD deployment report:PROD deployment date:Thu Jul 25 11:07:26 UTC 2024 Deployment approved:approved by: Krzysztof PrawdzikDeployed by:Krzysztof PrawdzikENV:LinkStatusDetailsAMERhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/AMER%20PROD/job/deploy_mdmhub_amer_prod_amer-prod/344/SUCCESSDeployment log:4.22.0-amer-prod-deploy.logAPAChttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/APAC%20PROD/job/deploy_mdmhub_apac_prod_apac-prod/284/SUCCESSDeployment log:4.22.0-apac-prod-deploy.logEMEAhttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/EMEA%20PROD/job/deploy_mdmhub_emea_prod_emea-prod/382/SUCCESSDeployment log:4.22.0-emea-prod-deploy.logGBL(EX-US)https://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBL%20PROD/job/deploy_mdmhub_emea_prod_gbl-prod/241/SUCCESSDeployment 
log:4.22.0-gbl-prod-deploy.logGBLUShttps://jenkins-gbl-mdm-hub.COMPANY.com/job/mdm-hub-kubernetes/view/GBLUS%20PROD/job/deploy_mdmhub_amer_prod_gblus-prod/294/SUCCESSDeployment log:4.22.0-gblus-prod-deploy.logPROD deploy hypercare details:Verification date26 Jul 2024 15:30 - 16:40Verification byBachanowicz, Mieczysław (Irek) + Prawdzik, Krzysztof feat. Anuskiewicz, Piotr , Szczesny, Grzegorz DashboardStatusDetailsMDMHUB / MDMHUB Component errorsAMER-PROD, Incorrect payload on Kafka, Piotr manually moved offset to fix this. GBLUS-PROD, single error with ";" and ")" APAC-PROD, map channel:Failure not recoveredProcessing of message: KR-6687996c10e6767c9e1cab6f failed with error: Invalid format: "6/20/1970" is malformed at "/20/1970"Piotr claims that this is the DLQ queue, probably with a single problematic event. EMEA-PROD, map-channel:400x Unexpected response: { "status": "ERROR", "status_code": 403, "error_message": "com.COMPANY.gcs.hcp.gateway.exception.RateLimitExceededException - TotalRequests Limit exceeded! (maxRequestsPerMinute=1200)" }Unexpected response: { "status": "ERROR", "status_code": 404, "error_message": "Contact not found by contact_id=a0EF000000pI8bAMAS! (market=IE)" }MDMHUB / MDMHUB KPIsWithout refactoring this dashboard, no insights can be extracted. SkippingMDMHUB / MDMHUB Components resourceGeneral / Snowflake QC TrendsEMEA-PROD, Empty COMPANYGlobalCustomerID - such entities are deleted at Snowflake level → nothing gets populated to downstream. Kubernetes / K8s Cluster Usage StatisticsKubernetes / Pod MonitoringAPAC-PROD, suspicious memory usage? 
EMEA-PROD, config deployGeneral / kubernetes-persistent-volumes General / Alerts Statistics AMER-PROD  publisher_broken_events_prodquality_gateway_auto_resolved_eventhub_callback_loopGBLUS-PROD  snowflake_last_entity_event_time_prodEMEA-PRODdag_failed_prod - exists for a long time, addressed in karma ..snowflake_generated_events_without_COMPANY_global_customer_ids_prodAPAC-PROD  pod_crashlooping_pdks - long time error in karmaGeneral / SSL Certificates and Endpoint Availability"
},
{
"title": "FAQ",
"pageID": "462236735",
"pageLink": "/display/GMDM/FAQ",
"content": "Questions and answers about HUB topics."
},
{
"title": "What is survivorship strategy in Reltio and where to find it?",
"pageID": "462236738",
"pageLink": "/pages/viewpage.action?pageId=462236738",
"content": "Simple attributes on Reltio profiles (not nested ones) have an OV attribute - showing whether the attribute value should be shown to user.Example:This HCO has two COMPANY Customer IDs (from different crosswalks) and the visible one won during calculation of survivorship strategy.The survivorship rules can be configured separately for each environment and attribute. Those are part of Reltio configuration and can be accessed here (authentication type is Bearer token):{{RELTIO_URL}}/{{tenantID}}/configurationDescription of Reltio survivorship rules:https://docs.reltio.com/en/model/consolidate-data/design-survivorship-rules/survivorship-rules"
}
]